May 16 00:43:14.730204 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 16 00:43:14.730225 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025 May 16 00:43:14.730233 kernel: efi: EFI v2.70 by EDK II May 16 00:43:14.730239 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 16 00:43:14.730247 kernel: random: crng init done May 16 00:43:14.730252 kernel: ACPI: Early table checksum verification disabled May 16 00:43:14.730259 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 16 00:43:14.730266 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 16 00:43:14.730271 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730276 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730282 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730287 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730293 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730298 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730308 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730314 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730320 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:43:14.730326 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 16 00:43:14.730331 kernel: NUMA: Failed to initialise from firmware May 16 00:43:14.730343 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:43:14.730349 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 16 00:43:14.730356 kernel: Zone ranges: May 16 00:43:14.730362 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:43:14.730369 kernel: DMA32 empty May 16 00:43:14.730378 kernel: Normal empty May 16 00:43:14.730383 kernel: Movable zone start for each node May 16 00:43:14.730389 kernel: Early memory node ranges May 16 00:43:14.730395 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 16 00:43:14.730400 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 16 00:43:14.730408 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 16 00:43:14.730413 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 16 00:43:14.730419 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 16 00:43:14.730425 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 16 00:43:14.730430 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 16 00:43:14.730437 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:43:14.730445 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 16 00:43:14.730451 kernel: psci: probing for conduit method from ACPI. May 16 00:43:14.730456 kernel: psci: PSCIv1.1 detected in firmware. 
May 16 00:43:14.730462 kernel: psci: Using standard PSCI v0.2 function IDs May 16 00:43:14.730470 kernel: psci: Trusted OS migration not required May 16 00:43:14.730478 kernel: psci: SMC Calling Convention v1.1 May 16 00:43:14.730484 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 16 00:43:14.730491 kernel: ACPI: SRAT not present May 16 00:43:14.730500 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 16 00:43:14.730506 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 16 00:43:14.730513 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 16 00:43:14.730519 kernel: Detected PIPT I-cache on CPU0 May 16 00:43:14.730527 kernel: CPU features: detected: GIC system register CPU interface May 16 00:43:14.730533 kernel: CPU features: detected: Hardware dirty bit management May 16 00:43:14.730539 kernel: CPU features: detected: Spectre-v4 May 16 00:43:14.730545 kernel: CPU features: detected: Spectre-BHB May 16 00:43:14.730552 kernel: CPU features: kernel page table isolation forced ON by KASLR May 16 00:43:14.730558 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 16 00:43:14.730566 kernel: CPU features: detected: ARM erratum 1418040 May 16 00:43:14.730573 kernel: CPU features: detected: SSBS not fully self-synchronizing May 16 00:43:14.730579 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 16 00:43:14.730584 kernel: Policy zone: DMA May 16 00:43:14.730591 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:43:14.730600 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:43:14.730606 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:43:14.730612 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:43:14.730618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:43:14.730626 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved) May 16 00:43:14.730632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 00:43:14.730640 kernel: trace event string verifier disabled May 16 00:43:14.730646 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:43:14.730653 kernel: rcu: RCU event tracing is enabled. May 16 00:43:14.730659 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 00:43:14.730667 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:43:14.730674 kernel: Tracing variant of Tasks RCU enabled. May 16 00:43:14.730680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 16 00:43:14.730688 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 00:43:14.730694 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 16 00:43:14.730701 kernel: GICv3: 256 SPIs implemented May 16 00:43:14.730707 kernel: GICv3: 0 Extended SPIs implemented May 16 00:43:14.730713 kernel: GICv3: Distributor has no Range Selector support May 16 00:43:14.730721 kernel: Root IRQ handler: gic_handle_irq May 16 00:43:14.730727 kernel: GICv3: 16 PPIs implemented May 16 00:43:14.730733 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 16 00:43:14.730739 kernel: ACPI: SRAT not present May 16 00:43:14.730745 kernel: ITS [mem 0x08080000-0x0809ffff] May 16 00:43:14.730751 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:43:14.730757 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 16 00:43:14.730763 kernel: GICv3: using LPI property table @0x00000000400d0000 May 16 00:43:14.730769 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 16 00:43:14.730776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:43:14.730782 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 16 00:43:14.730789 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 16 00:43:14.730805 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 16 00:43:14.730824 kernel: arm-pv: using stolen time PV May 16 00:43:14.730832 kernel: Console: colour dummy device 80x25 May 16 00:43:14.730838 kernel: ACPI: Core revision 20210730 May 16 00:43:14.730844 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 16 00:43:14.730851 kernel: pid_max: default: 32768 minimum: 301 May 16 00:43:14.730857 kernel: LSM: Security Framework initializing May 16 00:43:14.730865 kernel: SELinux: Initializing. May 16 00:43:14.730871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:43:14.730878 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:43:14.730884 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 16 00:43:14.730890 kernel: rcu: Hierarchical SRCU implementation. May 16 00:43:14.730896 kernel: Platform MSI: ITS@0x8080000 domain created May 16 00:43:14.730903 kernel: PCI/MSI: ITS@0x8080000 domain created May 16 00:43:14.730909 kernel: Remapping and enabling EFI services. May 16 00:43:14.730915 kernel: smp: Bringing up secondary CPUs ... 
May 16 00:43:14.730923 kernel: Detected PIPT I-cache on CPU1 May 16 00:43:14.730929 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 16 00:43:14.730936 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 16 00:43:14.730942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:43:14.730948 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 16 00:43:14.730954 kernel: Detected PIPT I-cache on CPU2 May 16 00:43:14.730960 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 16 00:43:14.730967 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 16 00:43:14.730973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:43:14.730979 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 16 00:43:14.730987 kernel: Detected PIPT I-cache on CPU3 May 16 00:43:14.730993 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 16 00:43:14.730999 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 16 00:43:14.731006 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:43:14.731017 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 16 00:43:14.731024 kernel: smp: Brought up 1 node, 4 CPUs May 16 00:43:14.731031 kernel: SMP: Total of 4 processors activated. May 16 00:43:14.731037 kernel: CPU features: detected: 32-bit EL0 Support May 16 00:43:14.731044 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 16 00:43:14.731050 kernel: CPU features: detected: Common not Private translations May 16 00:43:14.731057 kernel: CPU features: detected: CRC32 instructions May 16 00:43:14.731063 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 16 00:43:14.731071 kernel: CPU features: detected: LSE atomic instructions May 16 00:43:14.731078 kernel: CPU features: detected: Privileged Access Never May 16 00:43:14.731084 kernel: CPU features: detected: RAS Extension Support May 16 00:43:14.731091 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 16 00:43:14.731097 kernel: CPU: All CPU(s) started at EL1 May 16 00:43:14.731105 kernel: alternatives: patching kernel code May 16 00:43:14.731111 kernel: devtmpfs: initialized May 16 00:43:14.731117 kernel: KASLR enabled May 16 00:43:14.731124 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:43:14.731131 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 00:43:14.731137 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:43:14.731144 kernel: SMBIOS 3.0.0 present. 
May 16 00:43:14.731150 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 16 00:43:14.731156 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:43:14.731164 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 16 00:43:14.731171 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 16 00:43:14.731178 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 16 00:43:14.731184 kernel: audit: initializing netlink subsys (disabled) May 16 00:43:14.731191 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1 May 16 00:43:14.731198 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:43:14.731204 kernel: cpuidle: using governor menu May 16 00:43:14.731211 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 16 00:43:14.731218 kernel: ASID allocator initialised with 32768 entries May 16 00:43:14.731225 kernel: ACPI: bus type PCI registered May 16 00:43:14.731232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:43:14.731238 kernel: Serial: AMBA PL011 UART driver May 16 00:43:14.731245 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:43:14.731251 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 16 00:43:14.731258 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:43:14.731265 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 16 00:43:14.731271 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:43:14.731278 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 16 00:43:14.731286 kernel: ACPI: Added _OSI(Module Device) May 16 00:43:14.731292 kernel: ACPI: Added _OSI(Processor Device) May 16 00:43:14.731299 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:43:14.731305 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:43:14.731312 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 16 00:43:14.731318 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 16 00:43:14.731325 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 16 00:43:14.731331 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 00:43:14.731342 kernel: ACPI: Interpreter enabled May 16 00:43:14.731350 kernel: ACPI: Using GIC for interrupt routing May 16 00:43:14.731357 kernel: ACPI: MCFG table detected, 1 entries May 16 00:43:14.731364 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 16 00:43:14.731370 kernel: printk: console [ttyAMA0] enabled May 16 00:43:14.731377 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:43:14.731507 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:43:14.731568 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 16 00:43:14.731626 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 16 00:43:14.731682 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 16 00:43:14.731737 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 16 00:43:14.731745 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 16 00:43:14.731752 kernel: PCI host bridge to bus 0000:00 May 16 00:43:14.731837 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 16 00:43:14.731889 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 16 00:43:14.731939 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 16 00:43:14.731990 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:43:14.732058 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 16 00:43:14.732124 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 16 00:43:14.732184 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 16 00:43:14.732242 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 16 00:43:14.732299 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:43:14.732369 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:43:14.732428 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 16 00:43:14.732487 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 16 00:43:14.732539 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 16 00:43:14.732590 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 16 00:43:14.732642 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 16 00:43:14.732650 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 16 00:43:14.732657 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 16 00:43:14.732666 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 16 00:43:14.732672 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 16 00:43:14.732679 kernel: iommu: Default domain type: Translated May 16 00:43:14.732685 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 16 00:43:14.732692 kernel: vgaarb: loaded May 16 00:43:14.732698 kernel: pps_core: LinuxPPS API ver. 1 registered May 16 00:43:14.732705 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 16 00:43:14.732712 kernel: PTP clock support registered May 16 00:43:14.732718 kernel: Registered efivars operations May 16 00:43:14.732726 kernel: clocksource: Switched to clocksource arch_sys_counter May 16 00:43:14.732733 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:43:14.732740 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:43:14.732746 kernel: pnp: PnP ACPI init May 16 00:43:14.732830 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 16 00:43:14.732841 kernel: pnp: PnP ACPI: found 1 devices May 16 00:43:14.732848 kernel: NET: Registered PF_INET protocol family May 16 00:43:14.732855 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 00:43:14.732863 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 00:43:14.732870 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:43:14.732877 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:43:14.732883 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 16 00:43:14.732890 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 00:43:14.732896 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:43:14.732903 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:43:14.732910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:43:14.732916 kernel: PCI: CLS 0 bytes, default 64 May 16 00:43:14.732924 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 16 00:43:14.732931 kernel: kvm [1]: HYP mode not available May 16 00:43:14.732938 kernel: Initialise system trusted keyrings May 16 00:43:14.732944 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 00:43:14.732951 kernel: Key type asymmetric registered May 16 00:43:14.732957 kernel: Asymmetric key parser 'x509' registered May 16 00:43:14.732964 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 16 00:43:14.732971 kernel: io scheduler mq-deadline registered May 16 00:43:14.732977 kernel: io scheduler kyber registered May 16 00:43:14.732985 kernel: io scheduler bfq registered May 16 00:43:14.732992 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 16 00:43:14.732998 kernel: ACPI: button: Power Button [PWRB] May 16 00:43:14.733005 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 16 00:43:14.733066 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 16 00:43:14.733075 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:43:14.733081 kernel: thunder_xcv, ver 1.0 May 16 00:43:14.733088 kernel: thunder_bgx, ver 1.0 May 16 00:43:14.733094 kernel: nicpf, ver 1.0 May 16 00:43:14.733102 kernel: nicvf, ver 1.0 May 16 00:43:14.733172 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 16 00:43:14.733228 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:43:14 UTC (1747356194) May 16 00:43:14.733237 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:43:14.733243 kernel: NET: Registered PF_INET6 protocol family May 16 00:43:14.733250 kernel: Segment Routing with IPv6 May 16 00:43:14.733256 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:43:14.733263 kernel: NET: Registered PF_PACKET protocol family May 16 00:43:14.733271 kernel: Key type 
dns_resolver registered May 16 00:43:14.733277 kernel: registered taskstats version 1 May 16 00:43:14.733284 kernel: Loading compiled-in X.509 certificates May 16 00:43:14.733291 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2' May 16 00:43:14.733297 kernel: Key type .fscrypt registered May 16 00:43:14.733303 kernel: Key type fscrypt-provisioning registered May 16 00:43:14.733310 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:43:14.733317 kernel: ima: Allocated hash algorithm: sha1 May 16 00:43:14.733323 kernel: ima: No architecture policies found May 16 00:43:14.733331 kernel: clk: Disabling unused clocks May 16 00:43:14.733345 kernel: Freeing unused kernel memory: 36480K May 16 00:43:14.733352 kernel: Run /init as init process May 16 00:43:14.733358 kernel: with arguments: May 16 00:43:14.733365 kernel: /init May 16 00:43:14.733371 kernel: with environment: May 16 00:43:14.733378 kernel: HOME=/ May 16 00:43:14.733384 kernel: TERM=linux May 16 00:43:14.733390 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:43:14.733401 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:43:14.733409 systemd[1]: Detected virtualization kvm. May 16 00:43:14.733416 systemd[1]: Detected architecture arm64. May 16 00:43:14.733423 systemd[1]: Running in initrd. May 16 00:43:14.733430 systemd[1]: No hostname configured, using default hostname. May 16 00:43:14.733437 systemd[1]: Hostname set to . May 16 00:43:14.733444 systemd[1]: Initializing machine ID from VM UUID. May 16 00:43:14.733453 systemd[1]: Queued start job for default target initrd.target. May 16 00:43:14.733460 systemd[1]: Started systemd-ask-password-console.path. May 16 00:43:14.733467 systemd[1]: Reached target cryptsetup.target. May 16 00:43:14.733473 systemd[1]: Reached target paths.target. May 16 00:43:14.733480 systemd[1]: Reached target slices.target. May 16 00:43:14.733487 systemd[1]: Reached target swap.target. May 16 00:43:14.733494 systemd[1]: Reached target timers.target. May 16 00:43:14.733501 systemd[1]: Listening on iscsid.socket. May 16 00:43:14.733510 systemd[1]: Listening on iscsiuio.socket. May 16 00:43:14.733517 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:43:14.733524 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:43:14.733531 systemd[1]: Listening on systemd-journald.socket. May 16 00:43:14.733538 systemd[1]: Listening on systemd-networkd.socket. May 16 00:43:14.733545 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:43:14.733552 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:43:14.733559 systemd[1]: Reached target sockets.target. May 16 00:43:14.733567 systemd[1]: Starting kmod-static-nodes.service... May 16 00:43:14.733575 systemd[1]: Finished network-cleanup.service. May 16 00:43:14.733581 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:43:14.733588 systemd[1]: Starting systemd-journald.service... May 16 00:43:14.733595 systemd[1]: Starting systemd-modules-load.service... May 16 00:43:14.733602 systemd[1]: Starting systemd-resolved.service... May 16 00:43:14.733609 systemd[1]: Starting systemd-vconsole-setup.service... 
May 16 00:43:14.733616 systemd[1]: Finished kmod-static-nodes.service. May 16 00:43:14.733623 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:43:14.733631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:43:14.733638 systemd[1]: Finished systemd-vconsole-setup.service. May 16 00:43:14.733646 kernel: audit: type=1130 audit(1747356194.732:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.733656 systemd-journald[290]: Journal started May 16 00:43:14.733695 systemd-journald[290]: Runtime Journal (/run/log/journal/2a79dc274f1a42ddb6d7692ecddceeed) is 6.0M, max 48.7M, 42.6M free. May 16 00:43:14.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.726407 systemd-modules-load[291]: Inserted module 'overlay' May 16 00:43:14.736410 systemd[1]: Started systemd-journald.service. May 16 00:43:14.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.740396 kernel: audit: type=1130 audit(1747356194.737:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.739675 systemd[1]: Starting dracut-cmdline-ask.service... May 16 00:43:14.745791 systemd-resolved[292]: Positive Trust Anchors: May 16 00:43:14.748209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:43:14.745814 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:43:14.745843 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:43:14.746661 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:43:14.758981 kernel: Bridge firewalling registered May 16 00:43:14.759001 kernel: audit: type=1130 audit(1747356194.754:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.750011 systemd-resolved[292]: Defaulting to hostname 'linux'. May 16 00:43:14.762714 kernel: audit: type=1130 audit(1747356194.758:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:14.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.755749 systemd-modules-load[291]: Inserted module 'br_netfilter' May 16 00:43:14.755929 systemd[1]: Started systemd-resolved.service. May 16 00:43:14.759745 systemd[1]: Reached target nss-lookup.target. May 16 00:43:14.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.764520 systemd[1]: Finished dracut-cmdline-ask.service. May 16 00:43:14.768741 systemd[1]: Starting dracut-cmdline.service... May 16 00:43:14.771823 kernel: audit: type=1130 audit(1747356194.764:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.771849 kernel: SCSI subsystem initialized May 16 00:43:14.777870 dracut-cmdline[308]: dracut-dracut-053 May 16 00:43:14.779607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:43:14.779628 kernel: device-mapper: uevent: version 1.0.3 May 16 00:43:14.779637 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 16 00:43:14.780130 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:43:14.785979 systemd-modules-load[291]: Inserted module 'dm_multipath' May 16 00:43:14.788047 systemd[1]: Finished systemd-modules-load.service. May 16 00:43:14.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.790641 systemd[1]: Starting systemd-sysctl.service... May 16 00:43:14.793000 kernel: audit: type=1130 audit(1747356194.788:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.798579 systemd[1]: Finished systemd-sysctl.service. May 16 00:43:14.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.802821 kernel: audit: type=1130 audit(1747356194.799:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.845831 kernel: Loading iSCSI transport class v2.0-870. 
May 16 00:43:14.860821 kernel: iscsi: registered transport (tcp) May 16 00:43:14.876837 kernel: iscsi: registered transport (qla4xxx) May 16 00:43:14.876893 kernel: QLogic iSCSI HBA Driver May 16 00:43:14.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.911402 systemd[1]: Finished dracut-cmdline.service. May 16 00:43:14.915980 kernel: audit: type=1130 audit(1747356194.911:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:14.913150 systemd[1]: Starting dracut-pre-udev.service... May 16 00:43:14.957854 kernel: raid6: neonx8 gen() 13580 MB/s May 16 00:43:14.974834 kernel: raid6: neonx8 xor() 10584 MB/s May 16 00:43:14.991840 kernel: raid6: neonx4 gen() 13409 MB/s May 16 00:43:15.008831 kernel: raid6: neonx4 xor() 11165 MB/s May 16 00:43:15.025826 kernel: raid6: neonx2 gen() 13063 MB/s May 16 00:43:15.042826 kernel: raid6: neonx2 xor() 10255 MB/s May 16 00:43:15.059828 kernel: raid6: neonx1 gen() 10453 MB/s May 16 00:43:15.076828 kernel: raid6: neonx1 xor() 8750 MB/s May 16 00:43:15.093824 kernel: raid6: int64x8 gen() 6262 MB/s May 16 00:43:15.110821 kernel: raid6: int64x8 xor() 3508 MB/s May 16 00:43:15.127829 kernel: raid6: int64x4 gen() 7207 MB/s May 16 00:43:15.144834 kernel: raid6: int64x4 xor() 3846 MB/s May 16 00:43:15.161831 kernel: raid6: int64x2 gen() 6142 MB/s May 16 00:43:15.178826 kernel: raid6: int64x2 xor() 3312 MB/s May 16 00:43:15.195822 kernel: raid6: int64x1 gen() 5037 MB/s May 16 00:43:15.212939 kernel: raid6: int64x1 xor() 2642 MB/s May 16 00:43:15.212953 kernel: raid6: using algorithm neonx8 gen() 13580 MB/s May 16 00:43:15.212962 kernel: raid6: .... xor() 10584 MB/s, rmw enabled May 16 00:43:15.214041 kernel: raid6: using neon recovery algorithm May 16 00:43:15.224870 kernel: xor: measuring software checksum speed May 16 00:43:15.224897 kernel: 8regs : 17202 MB/sec May 16 00:43:15.226188 kernel: 32regs : 20681 MB/sec May 16 00:43:15.226200 kernel: arm64_neon : 27644 MB/sec May 16 00:43:15.226209 kernel: xor: using function: arm64_neon (27644 MB/sec) May 16 00:43:15.301827 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 16 00:43:15.316124 systemd[1]: Finished dracut-pre-udev.service. May 16 00:43:15.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:15.318306 systemd[1]: Starting systemd-udevd.service... May 16 00:43:15.321516 kernel: audit: type=1130 audit(1747356195.316:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:15.316000 audit: BPF prog-id=7 op=LOAD May 16 00:43:15.316000 audit: BPF prog-id=8 op=LOAD May 16 00:43:15.334022 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 16 00:43:15.338908 systemd[1]: Started systemd-udevd.service. May 16 00:43:15.340693 systemd[1]: Starting dracut-pre-trigger.service... May 16 00:43:15.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:15.353838 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation May 16 00:43:15.383712 systemd[1]: Finished dracut-pre-trigger.service. May 16 00:43:15.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:15.385348 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:43:15.419273 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:43:15.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:15.453411 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:43:15.458295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:43:15.458310 kernel: GPT:9289727 != 19775487 May 16 00:43:15.458319 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:43:15.458328 kernel: GPT:9289727 != 19775487 May 16 00:43:15.458351 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:43:15.458360 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:43:15.471831 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (557) May 16 00:43:15.475707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 16 00:43:15.476731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 16 00:43:15.482682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 16 00:43:15.486076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 16 00:43:15.489377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:43:15.491085 systemd[1]: Starting disk-uuid.service... May 16 00:43:15.497093 disk-uuid[566]: Primary Header is updated. May 16 00:43:15.497093 disk-uuid[566]: Secondary Entries is updated. May 16 00:43:15.497093 disk-uuid[566]: Secondary Header is updated. May 16 00:43:15.500182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:43:16.512829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:43:16.513015 disk-uuid[567]: The operation has completed successfully. May 16 00:43:16.536484 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:43:16.536586 systemd[1]: Finished disk-uuid.service. May 16 00:43:16.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.538495 systemd[1]: Starting verity-setup.service... May 16 00:43:16.562829 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:43:16.598542 systemd[1]: Found device dev-mapper-usr.device. May 16 00:43:16.600835 systemd[1]: Mounting sysusr-usr.mount... May 16 00:43:16.602609 systemd[1]: Finished verity-setup.service. May 16 00:43:16.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:16.650638 systemd[1]: Mounted sysusr-usr.mount. May 16 00:43:16.652092 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 16 00:43:16.651557 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 16 00:43:16.652297 systemd[1]: Starting ignition-setup.service... May 16 00:43:16.654814 systemd[1]: Starting parse-ip-for-networkd.service... May 16 00:43:16.660838 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:43:16.660881 kernel: BTRFS info (device vda6): using free space tree May 16 00:43:16.660890 kernel: BTRFS info (device vda6): has skinny extents May 16 00:43:16.669489 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:43:16.676254 systemd[1]: Finished ignition-setup.service. May 16 00:43:16.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.677931 systemd[1]: Starting ignition-fetch-offline.service... May 16 00:43:16.737640 systemd[1]: Finished parse-ip-for-networkd.service. May 16 00:43:16.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.739000 audit: BPF prog-id=9 op=LOAD May 16 00:43:16.740387 systemd[1]: Starting systemd-networkd.service... May 16 00:43:16.765289 systemd-networkd[741]: lo: Link UP May 16 00:43:16.765299 systemd-networkd[741]: lo: Gained carrier May 16 00:43:16.765663 systemd-networkd[741]: Enumeration completed May 16 00:43:16.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.765755 systemd[1]: Started systemd-networkd.service. May 16 00:43:16.765881 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:43:16.767400 systemd[1]: Reached target network.target. May 16 00:43:16.767609 systemd-networkd[741]: eth0: Link UP May 16 00:43:16.767612 systemd-networkd[741]: eth0: Gained carrier May 16 00:43:16.771387 systemd[1]: Starting iscsiuio.service... May 16 00:43:16.788106 systemd[1]: Started iscsiuio.service. May 16 00:43:16.790053 systemd[1]: Starting iscsid.service... May 16 00:43:16.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.791895 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:43:16.794567 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 16 00:43:16.794567 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 16 00:43:16.794567 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
May 16 00:43:16.794567 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. May 16 00:43:16.794567 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 16 00:43:16.794567 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 16 00:43:16.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.803898 ignition[656]: Ignition 2.14.0 May 16 00:43:16.797570 systemd[1]: Started iscsid.service. May 16 00:43:16.803904 ignition[656]: Stage: fetch-offline May 16 00:43:16.803262 systemd[1]: Starting dracut-initqueue.service... May 16 00:43:16.803945 ignition[656]: no configs at "/usr/lib/ignition/base.d" May 16 00:43:16.803956 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:16.804095 ignition[656]: parsed url from cmdline: "" May 16 00:43:16.804098 ignition[656]: no config URL provided May 16 00:43:16.804102 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:43:16.804118 ignition[656]: no config at "/usr/lib/ignition/user.ign" May 16 00:43:16.804140 ignition[656]: op(1): [started] loading QEMU firmware config module May 16 00:43:16.804144 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:43:16.816714 systemd[1]: Finished dracut-initqueue.service. May 16 00:43:16.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.815307 ignition[656]: op(1): [finished] loading QEMU firmware config module May 16 00:43:16.818682 systemd[1]: Reached target remote-fs-pre.target. May 16 00:43:16.815328 ignition[656]: QEMU firmware config was not found. Ignoring... May 16 00:43:16.820155 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:43:16.822066 systemd[1]: Reached target remote-fs.target. May 16 00:43:16.824532 systemd[1]: Starting dracut-pre-mount.service... May 16 00:43:16.832273 systemd[1]: Finished dracut-pre-mount.service. May 16 00:43:16.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.863728 ignition[656]: parsing config with SHA512: 4538e2e149b5c0ea6addee6d63cc01192fc28b152899cef5b9583a30ee959733b4f1116a2ffafe49f65808b928e91dde7506b070f4586faa746f3cd1f1639846 May 16 00:43:16.876194 unknown[656]: fetched base config from "system" May 16 00:43:16.876211 unknown[656]: fetched user config from "qemu" May 16 00:43:16.876829 ignition[656]: fetch-offline: fetch-offline passed May 16 00:43:16.876894 ignition[656]: Ignition finished successfully May 16 00:43:16.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.878653 systemd[1]: Finished ignition-fetch-offline.service. May 16 00:43:16.879706 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:43:16.880548 systemd[1]: Starting ignition-kargs.service... 
May 16 00:43:16.889465 ignition[762]: Ignition 2.14.0 May 16 00:43:16.889476 ignition[762]: Stage: kargs May 16 00:43:16.889581 ignition[762]: no configs at "/usr/lib/ignition/base.d" May 16 00:43:16.889591 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:16.892035 systemd[1]: Finished ignition-kargs.service. May 16 00:43:16.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.890544 ignition[762]: kargs: kargs passed May 16 00:43:16.890593 ignition[762]: Ignition finished successfully May 16 00:43:16.894602 systemd[1]: Starting ignition-disks.service... May 16 00:43:16.901518 ignition[768]: Ignition 2.14.0 May 16 00:43:16.901530 ignition[768]: Stage: disks May 16 00:43:16.901639 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 16 00:43:16.901649 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:16.904396 systemd[1]: Finished ignition-disks.service. May 16 00:43:16.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.902901 ignition[768]: disks: disks passed May 16 00:43:16.906224 systemd[1]: Reached target initrd-root-device.target. May 16 00:43:16.902950 ignition[768]: Ignition finished successfully May 16 00:43:16.907589 systemd[1]: Reached target local-fs-pre.target. May 16 00:43:16.908859 systemd[1]: Reached target local-fs.target. May 16 00:43:16.910324 systemd[1]: Reached target sysinit.target. May 16 00:43:16.911686 systemd[1]: Reached target basic.target. May 16 00:43:16.914181 systemd[1]: Starting systemd-fsck-root.service... May 16 00:43:16.945780 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 16 00:43:16.950178 systemd[1]: Finished systemd-fsck-root.service. May 16 00:43:16.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:16.952215 systemd[1]: Mounting sysroot.mount... May 16 00:43:16.958831 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 16 00:43:16.959504 systemd[1]: Mounted sysroot.mount. May 16 00:43:16.960301 systemd[1]: Reached target initrd-root-fs.target. May 16 00:43:16.962767 systemd[1]: Mounting sysroot-usr.mount... May 16 00:43:16.964347 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 16 00:43:16.964455 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:43:16.964485 systemd[1]: Reached target ignition-diskful.target. May 16 00:43:16.967553 systemd[1]: Mounted sysroot-usr.mount. May 16 00:43:16.969584 systemd[1]: Starting initrd-setup-root.service... 
May 16 00:43:16.975267 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:43:16.980991 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory May 16 00:43:16.986530 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:43:16.991607 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:43:17.025842 systemd[1]: Finished initrd-setup-root.service. May 16 00:43:17.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:17.027679 systemd[1]: Starting ignition-mount.service... May 16 00:43:17.029292 systemd[1]: Starting sysroot-boot.service... May 16 00:43:17.034529 bash[827]: umount: /sysroot/usr/share/oem: not mounted. May 16 00:43:17.044783 ignition[829]: INFO : Ignition 2.14.0 May 16 00:43:17.044783 ignition[829]: INFO : Stage: mount May 16 00:43:17.046434 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:43:17.046434 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:17.046434 ignition[829]: INFO : mount: mount passed May 16 00:43:17.046434 ignition[829]: INFO : Ignition finished successfully May 16 00:43:17.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:17.047563 systemd[1]: Finished ignition-mount.service. May 16 00:43:17.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:17.050508 systemd[1]: Finished sysroot-boot.service. May 16 00:43:17.610159 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 16 00:43:17.623567 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839) May 16 00:43:17.623602 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:43:17.623612 kernel: BTRFS info (device vda6): using free space tree May 16 00:43:17.624232 kernel: BTRFS info (device vda6): has skinny extents May 16 00:43:17.630058 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 16 00:43:17.631718 systemd[1]: Starting ignition-files.service... 
May 16 00:43:17.645364 ignition[859]: INFO : Ignition 2.14.0 May 16 00:43:17.645364 ignition[859]: INFO : Stage: files May 16 00:43:17.646996 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:43:17.646996 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:17.646996 ignition[859]: DEBUG : files: compiled without relabeling support, skipping May 16 00:43:17.650455 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:43:17.650455 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:43:17.653278 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:43:17.653278 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:43:17.653278 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:43:17.653278 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:43:17.653278 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 16 00:43:17.651550 unknown[859]: wrote ssh authorized keys file for user: core May 16 00:43:17.820351 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:43:18.150774 systemd-networkd[741]: eth0: Gained IPv6LL May 16 00:43:18.491747 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 16 00:43:18.493586 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:43:18.493586 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 16 00:43:18.806760 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:43:18.882168 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:43:18.882168 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:43:18.885684 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 16 00:43:19.353642 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:43:19.785279 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 16 00:43:19.785279 ignition[859]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:43:19.789149 ignition[859]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:43:19.849576 ignition[859]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:43:19.856383 kernel: kauditd_printk_skb: 23 callbacks suppressed May 16 00:43:19.856410 kernel: audit: type=1130 audit(1747356199.851:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:19.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.851411 systemd[1]: Finished ignition-files.service. May 16 00:43:19.857819 ignition[859]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:43:19.857819 ignition[859]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:43:19.857819 ignition[859]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:43:19.857819 ignition[859]: INFO : files: files passed May 16 00:43:19.857819 ignition[859]: INFO : Ignition finished successfully May 16 00:43:19.875092 kernel: audit: type=1130 audit(1747356199.861:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.875114 kernel: audit: type=1131 audit(1747356199.861:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.875124 kernel: audit: type=1130 audit(1747356199.868:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.853364 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 16 00:43:19.857158 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 16 00:43:19.878075 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 16 00:43:19.857811 systemd[1]: Starting ignition-quench.service... May 16 00:43:19.880880 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:43:19.860641 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:43:19.860716 systemd[1]: Finished ignition-quench.service. May 16 00:43:19.866558 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 16 00:43:19.869749 systemd[1]: Reached target ignition-complete.target. May 16 00:43:19.876849 systemd[1]: Starting initrd-parse-etc.service... May 16 00:43:19.892913 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:43:19.893025 systemd[1]: Finished initrd-parse-etc.service. 
May 16 00:43:19.899953 kernel: audit: type=1130 audit(1747356199.893:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.899976 kernel: audit: type=1131 audit(1747356199.893:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.894758 systemd[1]: Reached target initrd-fs.target. May 16 00:43:19.900693 systemd[1]: Reached target initrd.target. May 16 00:43:19.902066 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 16 00:43:19.902882 systemd[1]: Starting dracut-pre-pivot.service... May 16 00:43:19.912875 systemd[1]: Finished dracut-pre-pivot.service. May 16 00:43:19.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.914497 systemd[1]: Starting initrd-cleanup.service... May 16 00:43:19.918064 kernel: audit: type=1130 audit(1747356199.912:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.922865 systemd[1]: Stopped target nss-lookup.target. May 16 00:43:19.923741 systemd[1]: Stopped target remote-cryptsetup.target. May 16 00:43:19.925231 systemd[1]: Stopped target timers.target. May 16 00:43:19.926623 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:43:19.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.926729 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:43:19.932316 kernel: audit: type=1131 audit(1747356199.927:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.928073 systemd[1]: Stopped target initrd.target. May 16 00:43:19.931762 systemd[1]: Stopped target basic.target. May 16 00:43:19.933088 systemd[1]: Stopped target ignition-complete.target. May 16 00:43:19.934508 systemd[1]: Stopped target ignition-diskful.target. May 16 00:43:19.935864 systemd[1]: Stopped target initrd-root-device.target. May 16 00:43:19.937404 systemd[1]: Stopped target remote-fs.target. May 16 00:43:19.938770 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:43:19.940258 systemd[1]: Stopped target sysinit.target. May 16 00:43:19.941564 systemd[1]: Stopped target local-fs.target. May 16 00:43:19.942924 systemd[1]: Stopped target local-fs-pre.target. May 16 00:43:19.944276 systemd[1]: Stopped target swap.target. 
May 16 00:43:19.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.945507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:43:19.951324 kernel: audit: type=1131 audit(1747356199.946:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.945618 systemd[1]: Stopped dracut-pre-mount.service. May 16 00:43:19.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.946986 systemd[1]: Stopped target cryptsetup.target. May 16 00:43:19.956430 kernel: audit: type=1131 audit(1747356199.951:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.950563 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:43:19.950665 systemd[1]: Stopped dracut-initqueue.service. May 16 00:43:19.952175 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:43:19.952270 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:43:19.955939 systemd[1]: Stopped target paths.target. May 16 00:43:19.957137 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:43:19.960824 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:43:19.962153 systemd[1]: Stopped target slices.target. May 16 00:43:19.963706 systemd[1]: Stopped target sockets.target. May 16 00:43:19.965137 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:43:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.965223 systemd[1]: Closed iscsid.socket. May 16 00:43:19.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.966383 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:43:19.966487 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 16 00:43:19.967883 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:43:19.967978 systemd[1]: Stopped ignition-files.service. May 16 00:43:19.970255 systemd[1]: Stopping ignition-mount.service... May 16 00:43:19.971681 systemd[1]: Stopping iscsiuio.service... May 16 00:43:19.975980 systemd[1]: Stopping sysroot-boot.service... May 16 00:43:19.977196 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:43:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:19.979599 ignition[899]: INFO : Ignition 2.14.0 May 16 00:43:19.979599 ignition[899]: INFO : Stage: umount May 16 00:43:19.979599 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:43:19.979599 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:43:19.979599 ignition[899]: INFO : umount: umount passed May 16 00:43:19.979599 ignition[899]: INFO : Ignition finished successfully May 16 00:43:19.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.977338 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:43:19.978747 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:43:19.978938 systemd[1]: Stopped dracut-pre-trigger.service. May 16 00:43:19.981899 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:43:19.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.982014 systemd[1]: Stopped iscsiuio.service. May 16 00:43:19.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.985906 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:43:19.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.985987 systemd[1]: Stopped ignition-mount.service. May 16 00:43:19.989326 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:43:19.989930 systemd[1]: Stopped target network.target. May 16 00:43:20.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.991672 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:43:19.991708 systemd[1]: Closed iscsiuio.socket. May 16 00:43:19.993238 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:43:19.993284 systemd[1]: Stopped ignition-disks.service. May 16 00:43:19.994623 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:43:19.994662 systemd[1]: Stopped ignition-kargs.service. May 16 00:43:20.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:19.996691 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:43:20.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:19.996734 systemd[1]: Stopped ignition-setup.service. May 16 00:43:19.998204 systemd[1]: Stopping systemd-networkd.service... May 16 00:43:19.999755 systemd[1]: Stopping systemd-resolved.service... May 16 00:43:20.018000 audit: BPF prog-id=6 op=UNLOAD May 16 00:43:20.001569 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:43:20.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.001654 systemd[1]: Finished initrd-cleanup.service. May 16 00:43:20.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.007851 systemd-networkd[741]: eth0: DHCPv6 lease lost May 16 00:43:20.021000 audit: BPF prog-id=9 op=UNLOAD May 16 00:43:20.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.009099 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:43:20.009208 systemd[1]: Stopped systemd-networkd.service. May 16 00:43:20.013307 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:43:20.013410 systemd[1]: Stopped systemd-resolved.service. May 16 00:43:20.014686 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:43:20.014713 systemd[1]: Closed systemd-networkd.socket. May 16 00:43:20.016705 systemd[1]: Stopping network-cleanup.service... May 16 00:43:20.018345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:43:20.018403 systemd[1]: Stopped parse-ip-for-networkd.service. May 16 00:43:20.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.019967 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:43:20.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.020010 systemd[1]: Stopped systemd-sysctl.service. May 16 00:43:20.022258 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:43:20.022301 systemd[1]: Stopped systemd-modules-load.service. May 16 00:43:20.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.024706 systemd[1]: Stopping systemd-udevd.service... May 16 00:43:20.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:20.028284 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:43:20.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.032737 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:43:20.032915 systemd[1]: Stopped systemd-udevd.service. May 16 00:43:20.034719 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:43:20.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.034811 systemd[1]: Stopped network-cleanup.service. May 16 00:43:20.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.036135 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:43:20.036168 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:43:20.037648 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:43:20.037687 systemd[1]: Closed systemd-udevd-kernel.socket. May 16 00:43:20.039214 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:43:20.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.039260 systemd[1]: Stopped dracut-pre-udev.service. May 16 00:43:20.040930 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:43:20.040974 systemd[1]: Stopped dracut-cmdline.service. May 16 00:43:20.042363 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:43:20.042403 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:43:20.044701 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:43:20.045790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:43:20.045882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 16 00:43:20.048380 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:43:20.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.048417 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:43:20.049277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:43:20.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:20.049315 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:43:20.051781 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:43:20.052249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:43:20.052328 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:43:20.064037 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:43:20.064133 systemd[1]: Stopped sysroot-boot.service. May 16 00:43:20.065666 systemd[1]: Reached target initrd-switch-root.target. May 16 00:43:20.067266 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:43:20.067321 systemd[1]: Stopped initrd-setup-root.service. May 16 00:43:20.069446 systemd[1]: Starting initrd-switch-root.service... May 16 00:43:20.076267 systemd[1]: Switching root. May 16 00:43:20.093691 iscsid[746]: iscsid shutting down. May 16 00:43:20.094407 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 16 00:43:20.094453 systemd-journald[290]: Journal stopped May 16 00:43:22.172367 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:43:22.172420 kernel: SELinux: Class anon_inode not defined in policy. May 16 00:43:22.172432 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:43:22.172443 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:43:22.172460 kernel: SELinux: policy capability open_perms=1 May 16 00:43:22.172470 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:43:22.172487 kernel: SELinux: policy capability always_check_network=0 May 16 00:43:22.172499 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:43:22.172513 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:43:22.172523 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:43:22.172533 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:43:22.172545 systemd[1]: Successfully loaded SELinux policy in 39.505ms. May 16 00:43:22.172559 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.088ms. May 16 00:43:22.172571 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:43:22.172583 systemd[1]: Detected virtualization kvm. May 16 00:43:22.172595 systemd[1]: Detected architecture arm64. May 16 00:43:22.172605 systemd[1]: Detected first boot. May 16 00:43:22.172619 systemd[1]: Initializing machine ID from VM UUID. May 16 00:43:22.172631 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:43:22.172642 systemd[1]: Populated /etc with preset unit settings. May 16 00:43:22.172654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:43:22.172668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 16 00:43:22.172680 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:22.172691 systemd[1]: iscsid.service: Deactivated successfully. May 16 00:43:22.172701 systemd[1]: Stopped iscsid.service. May 16 00:43:22.172716 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:43:22.172728 systemd[1]: Stopped initrd-switch-root.service. May 16 00:43:22.172739 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:43:22.172751 systemd[1]: Created slice system-addon\x2dconfig.slice. May 16 00:43:22.172762 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:43:22.172772 systemd[1]: Created slice system-getty.slice. May 16 00:43:22.172782 systemd[1]: Created slice system-modprobe.slice. May 16 00:43:22.172813 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:43:22.172825 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:43:22.172835 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:43:22.172847 systemd[1]: Created slice user.slice. May 16 00:43:22.172857 systemd[1]: Started systemd-ask-password-console.path. May 16 00:43:22.172868 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:43:22.172878 systemd[1]: Set up automount boot.automount. May 16 00:43:22.172888 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 16 00:43:22.172898 systemd[1]: Stopped target initrd-switch-root.target. May 16 00:43:22.172909 systemd[1]: Stopped target initrd-fs.target. May 16 00:43:22.172920 systemd[1]: Stopped target initrd-root-fs.target. May 16 00:43:22.172930 systemd[1]: Reached target integritysetup.target. May 16 00:43:22.172941 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:43:22.172951 systemd[1]: Reached target remote-fs.target. May 16 00:43:22.172961 systemd[1]: Reached target slices.target. May 16 00:43:22.172972 systemd[1]: Reached target swap.target. May 16 00:43:22.172982 systemd[1]: Reached target torcx.target. May 16 00:43:22.172994 systemd[1]: Reached target veritysetup.target. May 16 00:43:22.173005 systemd[1]: Listening on systemd-coredump.socket. May 16 00:43:22.173016 systemd[1]: Listening on systemd-initctl.socket. May 16 00:43:22.173027 systemd[1]: Listening on systemd-networkd.socket. May 16 00:43:22.173038 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:43:22.173049 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:43:22.173059 systemd[1]: Listening on systemd-userdbd.socket. May 16 00:43:22.173070 systemd[1]: Mounting dev-hugepages.mount... May 16 00:43:22.173080 systemd[1]: Mounting dev-mqueue.mount... May 16 00:43:22.173091 systemd[1]: Mounting media.mount... May 16 00:43:22.173102 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:43:22.173112 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:43:22.173124 systemd[1]: Mounting tmp.mount... May 16 00:43:22.173134 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:43:22.173145 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:43:22.173155 systemd[1]: Starting kmod-static-nodes.service... May 16 00:43:22.173165 systemd[1]: Starting modprobe@configfs.service... May 16 00:43:22.173175 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:43:22.173185 systemd[1]: Starting modprobe@drm.service... 
May 16 00:43:22.173196 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:43:22.173207 systemd[1]: Starting modprobe@fuse.service... May 16 00:43:22.173219 systemd[1]: Starting modprobe@loop.service... May 16 00:43:22.173231 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:43:22.173243 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:43:22.173253 systemd[1]: Stopped systemd-fsck-root.service. May 16 00:43:22.173263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:43:22.173274 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:43:22.173284 kernel: fuse: init (API version 7.34) May 16 00:43:22.173294 systemd[1]: Stopped systemd-journald.service. May 16 00:43:22.173304 systemd[1]: Starting systemd-journald.service... May 16 00:43:22.173316 kernel: loop: module loaded May 16 00:43:22.173326 systemd[1]: Starting systemd-modules-load.service... May 16 00:43:22.173336 systemd[1]: Starting systemd-network-generator.service... May 16 00:43:22.173353 systemd[1]: Starting systemd-remount-fs.service... May 16 00:43:22.173364 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:43:22.173374 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:43:22.173385 systemd[1]: Stopped verity-setup.service. May 16 00:43:22.173395 systemd[1]: Mounted dev-hugepages.mount. May 16 00:43:22.173405 systemd[1]: Mounted dev-mqueue.mount. May 16 00:43:22.173415 systemd[1]: Mounted media.mount. May 16 00:43:22.173427 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:43:22.173438 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:43:22.173450 systemd[1]: Mounted tmp.mount. May 16 00:43:22.173462 systemd[1]: Finished kmod-static-nodes.service. May 16 00:43:22.173472 systemd[1]: Finished flatcar-tmpfiles.service. May 16 00:43:22.173488 systemd-journald[997]: Journal started May 16 00:43:22.173531 systemd-journald[997]: Runtime Journal (/run/log/journal/2a79dc274f1a42ddb6d7692ecddceeed) is 6.0M, max 48.7M, 42.6M free. 
May 16 00:43:20.173000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:43:20.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:43:20.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:43:20.291000 audit: BPF prog-id=10 op=LOAD May 16 00:43:20.291000 audit: BPF prog-id=10 op=UNLOAD May 16 00:43:20.291000 audit: BPF prog-id=11 op=LOAD May 16 00:43:20.291000 audit: BPF prog-id=11 op=UNLOAD May 16 00:43:20.350000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 16 00:43:20.350000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:20.350000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:43:20.351000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 16 00:43:20.351000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:20.351000 audit: CWD cwd="/" May 16 00:43:20.351000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:43:20.351000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:43:20.351000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:43:22.025000 audit: BPF prog-id=12 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=3 op=UNLOAD May 16 00:43:22.025000 audit: BPF prog-id=13 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=14 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=4 op=UNLOAD May 16 00:43:22.025000 audit: BPF prog-id=5 op=UNLOAD May 16 00:43:22.025000 audit: BPF prog-id=15 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=12 op=UNLOAD May 16 00:43:22.025000 
audit: BPF prog-id=16 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=17 op=LOAD May 16 00:43:22.025000 audit: BPF prog-id=13 op=UNLOAD May 16 00:43:22.025000 audit: BPF prog-id=14 op=UNLOAD May 16 00:43:22.026000 audit: BPF prog-id=18 op=LOAD May 16 00:43:22.026000 audit: BPF prog-id=15 op=UNLOAD May 16 00:43:22.026000 audit: BPF prog-id=19 op=LOAD May 16 00:43:22.026000 audit: BPF prog-id=20 op=LOAD May 16 00:43:22.026000 audit: BPF prog-id=16 op=UNLOAD May 16 00:43:22.026000 audit: BPF prog-id=17 op=UNLOAD May 16 00:43:22.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.037000 audit: BPF prog-id=18 op=UNLOAD May 16 00:43:22.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.133000 audit: BPF prog-id=21 op=LOAD May 16 00:43:22.134000 audit: BPF prog-id=22 op=LOAD May 16 00:43:22.134000 audit: BPF prog-id=23 op=LOAD May 16 00:43:22.134000 audit: BPF prog-id=19 op=UNLOAD May 16 00:43:22.134000 audit: BPF prog-id=20 op=UNLOAD May 16 00:43:22.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:22.170000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:43:22.170000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc92493a0 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:22.170000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:43:22.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:20.345476 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:22.023969 systemd[1]: Queued start job for default target multi-user.target. May 16 00:43:22.174907 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:43:20.345842 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:43:22.023983 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:43:20.345862 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:43:22.027779 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 16 00:43:20.345894 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 16 00:43:20.345903 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="skipped missing lower profile" missing profile=oem May 16 00:43:20.345935 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 16 00:43:20.345946 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 16 00:43:20.346147 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 16 00:43:20.346181 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:43:20.346193 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:43:20.346853 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 16 00:43:20.346892 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 16 00:43:20.346910 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 16 00:43:20.346924 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 16 00:43:20.346942 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 16 00:43:20.346954 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 16 00:43:21.773077 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:43:21.773339 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:43:21.773463 
/usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:43:22.176377 systemd[1]: Finished modprobe@configfs.service. May 16 00:43:21.773631 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:43:21.773682 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 16 00:43:21.773739 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:43:21Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 16 00:43:22.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.178068 systemd[1]: Started systemd-journald.service. May 16 00:43:22.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.178862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:43:22.179035 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:43:22.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.180172 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:43:22.180340 systemd[1]: Finished modprobe@drm.service. May 16 00:43:22.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.181545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:43:22.181707 systemd[1]: Finished modprobe@efi_pstore.service. 
May 16 00:43:22.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.182915 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:43:22.183061 systemd[1]: Finished modprobe@fuse.service. May 16 00:43:22.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.184113 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:43:22.184267 systemd[1]: Finished modprobe@loop.service. May 16 00:43:22.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.185537 systemd[1]: Finished systemd-modules-load.service. May 16 00:43:22.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.186814 systemd[1]: Finished systemd-network-generator.service. May 16 00:43:22.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.188136 systemd[1]: Finished systemd-remount-fs.service. May 16 00:43:22.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.189730 systemd[1]: Reached target network-pre.target. May 16 00:43:22.192012 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:43:22.193887 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:43:22.194613 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:43:22.196380 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:43:22.198371 systemd[1]: Starting systemd-journal-flush.service... May 16 00:43:22.199319 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:43:22.205711 systemd-journald[997]: Time spent on flushing to /var/log/journal/2a79dc274f1a42ddb6d7692ecddceeed is 21.054ms for 1004 entries. 
May 16 00:43:22.205711 systemd-journald[997]: System Journal (/var/log/journal/2a79dc274f1a42ddb6d7692ecddceeed) is 8.0M, max 195.6M, 187.6M free. May 16 00:43:22.246755 systemd-journald[997]: Received client request to flush runtime journal. May 16 00:43:22.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.200310 systemd[1]: Starting systemd-random-seed.service... May 16 00:43:22.201204 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:43:22.202193 systemd[1]: Starting systemd-sysctl.service... May 16 00:43:22.204120 systemd[1]: Starting systemd-sysusers.service... May 16 00:43:22.248832 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:43:22.207918 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:43:22.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.208974 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 16 00:43:22.210114 systemd[1]: Mounted sys-kernel-config.mount. May 16 00:43:22.212336 systemd[1]: Starting systemd-udev-settle.service... May 16 00:43:22.216293 systemd[1]: Finished systemd-random-seed.service. May 16 00:43:22.219440 systemd[1]: Reached target first-boot-complete.target. May 16 00:43:22.230497 systemd[1]: Finished systemd-sysctl.service. May 16 00:43:22.231690 systemd[1]: Finished systemd-sysusers.service. May 16 00:43:22.233917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:43:22.248003 systemd[1]: Finished systemd-journal-flush.service. May 16 00:43:22.254075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:43:22.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.621616 systemd[1]: Finished systemd-hwdb-update.service. May 16 00:43:22.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:22.622000 audit: BPF prog-id=24 op=LOAD May 16 00:43:22.622000 audit: BPF prog-id=25 op=LOAD May 16 00:43:22.622000 audit: BPF prog-id=7 op=UNLOAD May 16 00:43:22.622000 audit: BPF prog-id=8 op=UNLOAD May 16 00:43:22.624132 systemd[1]: Starting systemd-udevd.service... May 16 00:43:22.643610 systemd-udevd[1036]: Using default interface naming scheme 'v252'. May 16 00:43:22.660963 systemd[1]: Started systemd-udevd.service. May 16 00:43:22.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.662000 audit: BPF prog-id=26 op=LOAD May 16 00:43:22.664836 systemd[1]: Starting systemd-networkd.service... May 16 00:43:22.681774 systemd[1]: Starting systemd-userdbd.service... May 16 00:43:22.679000 audit: BPF prog-id=27 op=LOAD May 16 00:43:22.680000 audit: BPF prog-id=28 op=LOAD May 16 00:43:22.680000 audit: BPF prog-id=29 op=LOAD May 16 00:43:22.693438 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 16 00:43:22.714201 systemd[1]: Started systemd-userdbd.service. May 16 00:43:22.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.742530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:43:22.773427 systemd-networkd[1045]: lo: Link UP May 16 00:43:22.773437 systemd-networkd[1045]: lo: Gained carrier May 16 00:43:22.777550 systemd[1]: Finished systemd-udev-settle.service. May 16 00:43:22.777943 systemd-networkd[1045]: Enumeration completed May 16 00:43:22.778359 systemd-networkd[1045]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:43:22.778560 systemd[1]: Started systemd-networkd.service. May 16 00:43:22.780861 systemd[1]: Starting lvm2-activation-early.service... May 16 00:43:22.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.782872 systemd-networkd[1045]: eth0: Link UP May 16 00:43:22.782881 systemd-networkd[1045]: eth0: Gained carrier May 16 00:43:22.796719 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:43:22.800974 systemd-networkd[1045]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:43:22.828693 systemd[1]: Finished lvm2-activation-early.service. May 16 00:43:22.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.829760 systemd[1]: Reached target cryptsetup.target. May 16 00:43:22.831782 systemd[1]: Starting lvm2-activation.service... May 16 00:43:22.835400 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:43:22.863713 systemd[1]: Finished lvm2-activation.service. 
May 16 00:43:22.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.864754 systemd[1]: Reached target local-fs-pre.target. May 16 00:43:22.865686 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:43:22.865718 systemd[1]: Reached target local-fs.target. May 16 00:43:22.866535 systemd[1]: Reached target machines.target. May 16 00:43:22.868757 systemd[1]: Starting ldconfig.service... May 16 00:43:22.869872 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:43:22.869926 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:22.871476 systemd[1]: Starting systemd-boot-update.service... May 16 00:43:22.873785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 16 00:43:22.876394 systemd[1]: Starting systemd-machine-id-commit.service... May 16 00:43:22.879970 systemd[1]: Starting systemd-sysext.service... May 16 00:43:22.881244 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl) May 16 00:43:22.888860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 16 00:43:22.890248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 16 00:43:22.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.899102 systemd[1]: Unmounting usr-share-oem.mount... May 16 00:43:22.910909 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 16 00:43:22.911132 systemd[1]: Unmounted usr-share-oem.mount. May 16 00:43:22.964443 systemd[1]: Finished systemd-machine-id-commit.service. May 16 00:43:22.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:22.967809 kernel: loop0: detected capacity change from 0 to 207008 May 16 00:43:22.982011 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31) May 16 00:43:22.982011 systemd-fsck[1082]: /dev/vda1: 236 files, 117310/258078 clusters May 16 00:43:22.982833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:43:22.983527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 16 00:43:22.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.011875 kernel: loop1: detected capacity change from 0 to 207008 May 16 00:43:23.017182 (sd-sysext)[1085]: Using extensions 'kubernetes'. May 16 00:43:23.017548 (sd-sysext)[1085]: Merged extensions into '/usr'. May 16 00:43:23.035412 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:43:23.037140 systemd[1]: Starting modprobe@dm_mod.service... 
May 16 00:43:23.039185 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:43:23.044404 systemd[1]: Starting modprobe@loop.service... May 16 00:43:23.045280 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:43:23.045441 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.046420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:43:23.046591 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:43:23.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.048024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:43:23.048237 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:43:23.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.049645 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:43:23.049820 systemd[1]: Finished modprobe@loop.service. May 16 00:43:23.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.051174 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:43:23.051307 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:43:23.083541 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:43:23.090509 systemd[1]: Finished ldconfig.service. May 16 00:43:23.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.159513 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:43:23.161466 systemd[1]: Mounting boot.mount... May 16 00:43:23.163482 systemd[1]: Mounting usr-share-oem.mount... May 16 00:43:23.170068 systemd[1]: Mounted boot.mount. May 16 00:43:23.173091 systemd[1]: Mounted usr-share-oem.mount. May 16 00:43:23.175242 systemd[1]: Finished systemd-sysext.service. 
May 16 00:43:23.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.177671 systemd[1]: Starting ensure-sysext.service... May 16 00:43:23.179763 systemd[1]: Starting systemd-tmpfiles-setup.service... May 16 00:43:23.183457 systemd[1]: Finished systemd-boot-update.service. May 16 00:43:23.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.186134 systemd[1]: Reloading. May 16 00:43:23.190355 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 16 00:43:23.191466 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:43:23.192929 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:43:23.229128 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-16T00:43:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:23.236449 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-05-16T00:43:23Z" level=info msg="torcx already run" May 16 00:43:23.281848 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:43:23.281871 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:43:23.298161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:23.342000 audit: BPF prog-id=30 op=LOAD May 16 00:43:23.342000 audit: BPF prog-id=26 op=UNLOAD May 16 00:43:23.342000 audit: BPF prog-id=31 op=LOAD May 16 00:43:23.342000 audit: BPF prog-id=21 op=UNLOAD May 16 00:43:23.342000 audit: BPF prog-id=32 op=LOAD May 16 00:43:23.342000 audit: BPF prog-id=33 op=LOAD May 16 00:43:23.342000 audit: BPF prog-id=22 op=UNLOAD May 16 00:43:23.342000 audit: BPF prog-id=23 op=UNLOAD May 16 00:43:23.343000 audit: BPF prog-id=34 op=LOAD May 16 00:43:23.343000 audit: BPF prog-id=35 op=LOAD May 16 00:43:23.343000 audit: BPF prog-id=24 op=UNLOAD May 16 00:43:23.343000 audit: BPF prog-id=25 op=UNLOAD May 16 00:43:23.344000 audit: BPF prog-id=36 op=LOAD May 16 00:43:23.344000 audit: BPF prog-id=27 op=UNLOAD May 16 00:43:23.344000 audit: BPF prog-id=37 op=LOAD May 16 00:43:23.344000 audit: BPF prog-id=38 op=LOAD May 16 00:43:23.344000 audit: BPF prog-id=28 op=UNLOAD May 16 00:43:23.344000 audit: BPF prog-id=29 op=UNLOAD May 16 00:43:23.348273 systemd[1]: Finished systemd-tmpfiles-setup.service. May 16 00:43:23.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:23.353738 systemd[1]: Starting audit-rules.service... May 16 00:43:23.356166 systemd[1]: Starting clean-ca-certificates.service... May 16 00:43:23.358630 systemd[1]: Starting systemd-journal-catalog-update.service... May 16 00:43:23.363000 audit: BPF prog-id=39 op=LOAD May 16 00:43:23.366213 systemd[1]: Starting systemd-resolved.service... May 16 00:43:23.370000 audit: BPF prog-id=40 op=LOAD May 16 00:43:23.373877 systemd[1]: Starting systemd-timesyncd.service... May 16 00:43:23.380627 systemd[1]: Starting systemd-update-utmp.service... May 16 00:43:23.389000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 16 00:43:23.386788 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:43:23.388547 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:43:23.390975 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:43:23.393492 systemd[1]: Starting modprobe@loop.service... May 16 00:43:23.394457 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:43:23.394613 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.395695 systemd[1]: Finished clean-ca-certificates.service. May 16 00:43:23.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.397315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:43:23.397482 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:43:23.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.398978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:43:23.399112 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:43:23.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.400743 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:43:23.400896 systemd[1]: Finished modprobe@loop.service. May 16 00:43:23.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:23.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.405475 systemd[1]: Finished systemd-journal-catalog-update.service. May 16 00:43:23.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.407306 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:43:23.408888 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:43:23.411233 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:43:23.413750 systemd[1]: Starting modprobe@loop.service... May 16 00:43:23.414875 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:43:23.415028 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.416778 systemd[1]: Starting systemd-update-done.service... May 16 00:43:23.417745 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:43:23.419149 systemd[1]: Finished systemd-update-utmp.service. May 16 00:43:23.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.420662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:43:23.420824 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:43:23.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.422159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:43:23.422317 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:43:23.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.423820 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:43:23.423960 systemd[1]: Finished modprobe@loop.service. May 16 00:43:23.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:23.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.426431 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:43:23.426566 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:43:23.430262 systemd[1]: Finished systemd-update-done.service. May 16 00:43:23.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:43:23.431817 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:43:23.436144 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:43:23.438583 systemd[1]: Starting modprobe@drm.service... May 16 00:43:23.441090 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:43:23.443525 systemd[1]: Starting modprobe@loop.service... May 16 00:43:23.444751 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:43:23.444911 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.448192 systemd[1]: Starting systemd-networkd-wait-online.service... May 16 00:43:23.449465 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:43:23.450933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:43:23.451048 systemd-resolved[1156]: Positive Trust Anchors: May 16 00:43:23.451067 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:43:23.451095 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:43:23.451123 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:43:23.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:43:23.458000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 16 00:43:23.458000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc4b12d30 a2=420 a3=0 items=0 ppid=1152 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:43:23.458000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 16 00:43:23.459238 augenrules[1184]: No rules May 16 00:43:23.459832 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:43:23.459981 systemd[1]: Finished modprobe@drm.service. May 16 00:43:23.461474 systemd[1]: Finished audit-rules.service. May 16 00:43:23.462897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:43:23.463042 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:43:23.464443 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:43:23.464573 systemd[1]: Finished modprobe@loop.service. May 16 00:43:23.467407 systemd[1]: Started systemd-timesyncd.service. May 16 00:43:23.467714 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:43:23.468246 systemd-timesyncd[1158]: Initial clock synchronization to Fri 2025-05-16 00:43:23.854863 UTC. May 16 00:43:23.468937 systemd[1]: Finished ensure-sysext.service. May 16 00:43:23.469221 systemd-resolved[1156]: Defaulting to hostname 'linux'. May 16 00:43:23.470289 systemd[1]: Reached target time-set.target. May 16 00:43:23.471146 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:43:23.471198 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:43:23.471310 systemd[1]: Started systemd-resolved.service. May 16 00:43:23.472251 systemd[1]: Reached target network.target. May 16 00:43:23.473111 systemd[1]: Reached target nss-lookup.target. May 16 00:43:23.474192 systemd[1]: Reached target sysinit.target. May 16 00:43:23.475124 systemd[1]: Started motdgen.path. May 16 00:43:23.475911 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:43:23.477429 systemd[1]: Started logrotate.timer. May 16 00:43:23.478327 systemd[1]: Started mdadm.timer. May 16 00:43:23.479064 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:43:23.479983 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:43:23.480020 systemd[1]: Reached target paths.target. May 16 00:43:23.480835 systemd[1]: Reached target timers.target. May 16 00:43:23.482038 systemd[1]: Listening on dbus.socket. May 16 00:43:23.484188 systemd[1]: Starting docker.socket... May 16 00:43:23.488718 systemd[1]: Listening on sshd.socket. May 16 00:43:23.489906 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.491010 systemd[1]: Listening on docker.socket. May 16 00:43:23.492034 systemd[1]: Reached target sockets.target. May 16 00:43:23.492856 systemd[1]: Reached target basic.target. 
May 16 00:43:23.493708 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:43:23.493742 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:43:23.495093 systemd[1]: Starting containerd.service... May 16 00:43:23.497167 systemd[1]: Starting dbus.service... May 16 00:43:23.499147 systemd[1]: Starting enable-oem-cloudinit.service... May 16 00:43:23.501704 systemd[1]: Starting extend-filesystems.service... May 16 00:43:23.502750 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:43:23.504705 systemd[1]: Starting motdgen.service... May 16 00:43:23.507060 systemd[1]: Starting prepare-helm.service... May 16 00:43:23.509427 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:43:23.514253 jq[1195]: false May 16 00:43:23.511788 systemd[1]: Starting sshd-keygen.service... May 16 00:43:23.516111 systemd[1]: Starting systemd-logind.service... May 16 00:43:23.517161 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:43:23.517278 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:43:23.518215 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:43:23.519213 systemd[1]: Starting update-engine.service... May 16 00:43:23.521461 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:43:23.524925 jq[1208]: true May 16 00:43:23.526201 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:43:23.526450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 16 00:43:23.528358 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:43:23.528582 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:43:23.544005 extend-filesystems[1196]: Found loop1 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda May 16 00:43:23.544005 extend-filesystems[1196]: Found vda1 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda2 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda3 May 16 00:43:23.544005 extend-filesystems[1196]: Found usr May 16 00:43:23.544005 extend-filesystems[1196]: Found vda4 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda6 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda7 May 16 00:43:23.544005 extend-filesystems[1196]: Found vda9 May 16 00:43:23.544005 extend-filesystems[1196]: Checking size of /dev/vda9 May 16 00:43:23.561906 tar[1211]: linux-arm64/LICENSE May 16 00:43:23.561906 tar[1211]: linux-arm64/helm May 16 00:43:23.559931 dbus-daemon[1194]: [system] SELinux support is enabled May 16 00:43:23.551492 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:43:23.562353 jq[1214]: true May 16 00:43:23.551686 systemd[1]: Finished motdgen.service. May 16 00:43:23.560115 systemd[1]: Started dbus.service. May 16 00:43:23.563124 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:43:23.563169 systemd[1]: Reached target system-config.target. 
May 16 00:43:23.564276 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:43:23.564296 systemd[1]: Reached target user-config.target. May 16 00:43:23.570968 extend-filesystems[1196]: Resized partition /dev/vda9 May 16 00:43:23.573223 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:43:23.603821 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:43:23.640719 update_engine[1207]: I0516 00:43:23.640355 1207 main.cc:92] Flatcar Update Engine starting May 16 00:43:23.641249 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:43:23.641676 systemd-logind[1203]: New seat seat0. May 16 00:43:23.652718 systemd[1]: Started systemd-logind.service. May 16 00:43:23.654338 systemd[1]: Started update-engine.service. May 16 00:43:23.657430 systemd[1]: Started locksmithd.service. May 16 00:43:23.658755 update_engine[1207]: I0516 00:43:23.658623 1207 update_check_scheduler.cc:74] Next update check in 11m55s May 16 00:43:23.662049 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:43:23.678756 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:43:23.678756 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:43:23.678756 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:43:23.686090 bash[1245]: Updated "/home/core/.ssh/authorized_keys" May 16 00:43:23.680442 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:43:23.686230 extend-filesystems[1196]: Resized filesystem in /dev/vda9 May 16 00:43:23.680682 systemd[1]: Finished extend-filesystems.service. May 16 00:43:23.682232 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:43:23.691569 env[1215]: time="2025-05-16T00:43:23.691511200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:43:23.711682 env[1215]: time="2025-05-16T00:43:23.711626800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:43:23.712028 env[1215]: time="2025-05-16T00:43:23.712005560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:43:23.713481 env[1215]: time="2025-05-16T00:43:23.713435800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:43:23.713595 env[1215]: time="2025-05-16T00:43:23.713578720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:43:23.713938 env[1215]: time="2025-05-16T00:43:23.713899640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:43:23.714037 env[1215]: time="2025-05-16T00:43:23.714021680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 16 00:43:23.714102 env[1215]: time="2025-05-16T00:43:23.714086800Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:43:23.714152 env[1215]: time="2025-05-16T00:43:23.714139960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:43:23.714286 env[1215]: time="2025-05-16T00:43:23.714269640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:43:23.714823 env[1215]: time="2025-05-16T00:43:23.714767400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:43:23.715061 env[1215]: time="2025-05-16T00:43:23.715037520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:43:23.715131 env[1215]: time="2025-05-16T00:43:23.715116520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:43:23.715271 env[1215]: time="2025-05-16T00:43:23.715250160Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:43:23.715338 env[1215]: time="2025-05-16T00:43:23.715324360Z" level=info msg="metadata content store policy set" policy=shared May 16 00:43:23.718476 env[1215]: time="2025-05-16T00:43:23.718444440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:43:23.718625 env[1215]: time="2025-05-16T00:43:23.718608680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:43:23.718686 env[1215]: time="2025-05-16T00:43:23.718673000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:43:23.718773 env[1215]: time="2025-05-16T00:43:23.718755840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:43:23.718851 env[1215]: time="2025-05-16T00:43:23.718835560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:43:23.718912 env[1215]: time="2025-05-16T00:43:23.718898400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:43:23.718979 env[1215]: time="2025-05-16T00:43:23.718964920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:43:23.719441 env[1215]: time="2025-05-16T00:43:23.719411160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:43:23.719530 env[1215]: time="2025-05-16T00:43:23.719514200Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:43:23.719593 env[1215]: time="2025-05-16T00:43:23.719576640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:43:23.719652 env[1215]: time="2025-05-16T00:43:23.719638440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 16 00:43:23.719721 env[1215]: time="2025-05-16T00:43:23.719708360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:43:23.719969 env[1215]: time="2025-05-16T00:43:23.719948160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:43:23.720158 env[1215]: time="2025-05-16T00:43:23.720136040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:43:23.720552 env[1215]: time="2025-05-16T00:43:23.720522640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:43:23.720596 env[1215]: time="2025-05-16T00:43:23.720569200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720596 env[1215]: time="2025-05-16T00:43:23.720584640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:43:23.720752 env[1215]: time="2025-05-16T00:43:23.720738840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720778 env[1215]: time="2025-05-16T00:43:23.720755960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720778 env[1215]: time="2025-05-16T00:43:23.720768920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720852 env[1215]: time="2025-05-16T00:43:23.720781560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720874 env[1215]: time="2025-05-16T00:43:23.720850760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720874 env[1215]: time="2025-05-16T00:43:23.720865280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720911 env[1215]: time="2025-05-16T00:43:23.720876920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720911 env[1215]: time="2025-05-16T00:43:23.720890600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:43:23.720911 env[1215]: time="2025-05-16T00:43:23.720905040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:43:23.721070 env[1215]: time="2025-05-16T00:43:23.721050760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:43:23.721100 env[1215]: time="2025-05-16T00:43:23.721074240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:43:23.721100 env[1215]: time="2025-05-16T00:43:23.721088400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:43:23.721140 env[1215]: time="2025-05-16T00:43:23.721101600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:43:23.721140 env[1215]: time="2025-05-16T00:43:23.721116640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:43:23.721140 env[1215]: time="2025-05-16T00:43:23.721134440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:43:23.721206 env[1215]: time="2025-05-16T00:43:23.721151960Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:43:23.721206 env[1215]: time="2025-05-16T00:43:23.721187320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:43:23.721466 env[1215]: time="2025-05-16T00:43:23.721400920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:43:23.721466 env[1215]: time="2025-05-16T00:43:23.721463800Z" level=info msg="Connect containerd service" May 16 00:43:23.726926 env[1215]: time="2025-05-16T00:43:23.721495920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:43:23.727265 env[1215]: time="2025-05-16T00:43:23.727228800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:43:23.727729 env[1215]: time="2025-05-16T00:43:23.727694680Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 16 00:43:23.727759 env[1215]: time="2025-05-16T00:43:23.727745560Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:43:23.729002 env[1215]: time="2025-05-16T00:43:23.727793000Z" level=info msg="containerd successfully booted in 0.036888s" May 16 00:43:23.727897 systemd[1]: Started containerd.service. May 16 00:43:23.729338 env[1215]: time="2025-05-16T00:43:23.729301240Z" level=info msg="Start subscribing containerd event" May 16 00:43:23.729385 env[1215]: time="2025-05-16T00:43:23.729362320Z" level=info msg="Start recovering state" May 16 00:43:23.729454 env[1215]: time="2025-05-16T00:43:23.729438480Z" level=info msg="Start event monitor" May 16 00:43:23.729485 env[1215]: time="2025-05-16T00:43:23.729465840Z" level=info msg="Start snapshots syncer" May 16 00:43:23.729485 env[1215]: time="2025-05-16T00:43:23.729477640Z" level=info msg="Start cni network conf syncer for default" May 16 00:43:23.729485 env[1215]: time="2025-05-16T00:43:23.729484800Z" level=info msg="Start streaming server" May 16 00:43:23.786538 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:43:24.003139 tar[1211]: linux-arm64/README.md May 16 00:43:24.007928 systemd[1]: Finished prepare-helm.service. May 16 00:43:24.806387 systemd-networkd[1045]: eth0: Gained IPv6LL May 16 00:43:24.808100 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:43:24.809538 systemd[1]: Reached target network-online.target. May 16 00:43:24.812297 systemd[1]: Starting kubelet.service... May 16 00:43:25.427781 systemd[1]: Started kubelet.service. May 16 00:43:25.871794 kubelet[1263]: E0516 00:43:25.871668 1263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:43:25.874018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:43:25.874147 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:43:27.120181 systemd[1]: Created slice system-sshd.slice. May 16 00:43:27.947922 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:43:27.968877 systemd[1]: Finished sshd-keygen.service. May 16 00:43:27.971597 systemd[1]: Starting issuegen.service... May 16 00:43:27.974185 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:34004.service. May 16 00:43:27.979580 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:43:27.979804 systemd[1]: Finished issuegen.service. May 16 00:43:27.982503 systemd[1]: Starting systemd-user-sessions.service... May 16 00:43:27.989339 systemd[1]: Finished systemd-user-sessions.service. May 16 00:43:27.992089 systemd[1]: Started getty@tty1.service. May 16 00:43:27.994621 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:43:27.995965 systemd[1]: Reached target getty.target. May 16 00:43:27.996917 systemd[1]: Reached target multi-user.target. May 16 00:43:27.999415 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:43:28.007222 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:43:28.007403 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
May 16 00:43:28.008817 systemd[1]: Startup finished in 617ms (kernel) + 5.551s (initrd) + 7.880s (userspace) = 14.049s. May 16 00:43:28.029467 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 34004 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:28.031864 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.041923 systemd-logind[1203]: New session 1 of user core. May 16 00:43:28.043329 systemd[1]: Created slice user-500.slice. May 16 00:43:28.044802 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:43:28.055070 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:43:28.057850 systemd[1]: Starting user@500.service... May 16 00:43:28.061329 (systemd)[1288]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.138497 systemd[1288]: Queued start job for default target default.target. May 16 00:43:28.139041 systemd[1288]: Reached target paths.target. May 16 00:43:28.139075 systemd[1288]: Reached target sockets.target. May 16 00:43:28.139087 systemd[1288]: Reached target timers.target. May 16 00:43:28.139097 systemd[1288]: Reached target basic.target. May 16 00:43:28.139148 systemd[1288]: Reached target default.target. May 16 00:43:28.139174 systemd[1288]: Startup finished in 67ms. May 16 00:43:28.139462 systemd[1]: Started user@500.service. May 16 00:43:28.140524 systemd[1]: Started session-1.scope. May 16 00:43:28.194412 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:34012.service. May 16 00:43:28.241299 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 34012 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:28.242693 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.246327 systemd-logind[1203]: New session 2 of user core. May 16 00:43:28.247606 systemd[1]: Started session-2.scope. May 16 00:43:28.306055 sshd[1297]: pam_unix(sshd:session): session closed for user core May 16 00:43:28.309366 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:34028.service. May 16 00:43:28.309992 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:34012.service: Deactivated successfully. May 16 00:43:28.310746 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:43:28.311301 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. May 16 00:43:28.312117 systemd-logind[1203]: Removed session 2. May 16 00:43:28.346913 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 34028 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:28.348207 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.351569 systemd-logind[1203]: New session 3 of user core. May 16 00:43:28.352416 systemd[1]: Started session-3.scope. May 16 00:43:28.405098 sshd[1302]: pam_unix(sshd:session): session closed for user core May 16 00:43:28.408079 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:34028.service: Deactivated successfully. May 16 00:43:28.408772 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:43:28.409316 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. May 16 00:43:28.410583 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:34036.service. May 16 00:43:28.411360 systemd-logind[1203]: Removed session 3. 
May 16 00:43:28.448194 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 34036 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:28.449516 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.455909 systemd-logind[1203]: New session 4 of user core. May 16 00:43:28.456797 systemd[1]: Started session-4.scope. May 16 00:43:28.511714 sshd[1309]: pam_unix(sshd:session): session closed for user core May 16 00:43:28.514761 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:34036.service: Deactivated successfully. May 16 00:43:28.515495 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:43:28.516049 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. May 16 00:43:28.517265 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:34046.service. May 16 00:43:28.517918 systemd-logind[1203]: Removed session 4. May 16 00:43:28.555514 sshd[1315]: Accepted publickey for core from 10.0.0.1 port 34046 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:43:28.556981 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:43:28.560983 systemd-logind[1203]: New session 5 of user core. May 16 00:43:28.561918 systemd[1]: Started session-5.scope. May 16 00:43:28.625793 sudo[1318]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:43:28.626048 sudo[1318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:43:28.689091 systemd[1]: Starting docker.service... May 16 00:43:28.778585 env[1330]: time="2025-05-16T00:43:28.778442457Z" level=info msg="Starting up" May 16 00:43:28.780483 env[1330]: time="2025-05-16T00:43:28.780312328Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:43:28.780483 env[1330]: time="2025-05-16T00:43:28.780336104Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:43:28.780483 env[1330]: time="2025-05-16T00:43:28.780357947Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:43:28.780483 env[1330]: time="2025-05-16T00:43:28.780369711Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:43:28.783209 env[1330]: time="2025-05-16T00:43:28.783166353Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:43:28.783322 env[1330]: time="2025-05-16T00:43:28.783302016Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:43:28.783390 env[1330]: time="2025-05-16T00:43:28.783373385Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:43:28.783442 env[1330]: time="2025-05-16T00:43:28.783428753Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:43:28.962650 env[1330]: time="2025-05-16T00:43:28.962586735Z" level=info msg="Loading containers: start." May 16 00:43:29.117846 kernel: Initializing XFRM netlink socket May 16 00:43:29.142469 env[1330]: time="2025-05-16T00:43:29.142425537Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 16 00:43:29.200757 systemd-networkd[1045]: docker0: Link UP May 16 00:43:29.230448 env[1330]: time="2025-05-16T00:43:29.230383723Z" level=info msg="Loading containers: done." 
May 16 00:43:29.250576 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2567730238-merged.mount: Deactivated successfully. May 16 00:43:29.252973 env[1330]: time="2025-05-16T00:43:29.252932006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:43:29.253172 env[1330]: time="2025-05-16T00:43:29.253153081Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 16 00:43:29.253276 env[1330]: time="2025-05-16T00:43:29.253262820Z" level=info msg="Daemon has completed initialization" May 16 00:43:29.268389 systemd[1]: Started docker.service. May 16 00:43:29.277036 env[1330]: time="2025-05-16T00:43:29.276976289Z" level=info msg="API listen on /run/docker.sock" May 16 00:43:29.971493 env[1215]: time="2025-05-16T00:43:29.971434534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 00:43:30.577393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2865410499.mount: Deactivated successfully. May 16 00:43:31.964283 env[1215]: time="2025-05-16T00:43:31.964214689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:31.966259 env[1215]: time="2025-05-16T00:43:31.966201061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:31.968895 env[1215]: time="2025-05-16T00:43:31.968639140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:31.970706 env[1215]: time="2025-05-16T00:43:31.970663012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:31.972699 env[1215]: time="2025-05-16T00:43:31.972640459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 16 00:43:31.973490 env[1215]: time="2025-05-16T00:43:31.973456073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 00:43:33.552882 env[1215]: time="2025-05-16T00:43:33.552838422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:33.554871 env[1215]: time="2025-05-16T00:43:33.554836929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:33.556559 env[1215]: time="2025-05-16T00:43:33.556531111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:33.558339 env[1215]: time="2025-05-16T00:43:33.558311571Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:33.559176 env[1215]: time="2025-05-16T00:43:33.559145574Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 16 00:43:33.560231 env[1215]: time="2025-05-16T00:43:33.560194419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 00:43:34.859170 env[1215]: time="2025-05-16T00:43:34.859108276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:34.860606 env[1215]: time="2025-05-16T00:43:34.860560293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:34.862329 env[1215]: time="2025-05-16T00:43:34.862295512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:34.864083 env[1215]: time="2025-05-16T00:43:34.864048838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:34.865002 env[1215]: time="2025-05-16T00:43:34.864968967Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 16 00:43:34.865558 env[1215]: time="2025-05-16T00:43:34.865531116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:43:35.959162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587317068.mount: Deactivated successfully. May 16 00:43:35.960099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:43:35.960241 systemd[1]: Stopped kubelet.service. May 16 00:43:35.961631 systemd[1]: Starting kubelet.service... May 16 00:43:36.061683 systemd[1]: Started kubelet.service. May 16 00:43:36.114332 kubelet[1464]: E0516 00:43:36.114278 1464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:43:36.116466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:43:36.116605 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 00:43:36.681645 env[1215]: time="2025-05-16T00:43:36.681597145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:36.683011 env[1215]: time="2025-05-16T00:43:36.682969031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:36.684309 env[1215]: time="2025-05-16T00:43:36.684272617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:36.685721 env[1215]: time="2025-05-16T00:43:36.685682712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:36.686368 env[1215]: time="2025-05-16T00:43:36.686318410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 16 00:43:36.687352 env[1215]: time="2025-05-16T00:43:36.687320970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:43:37.305875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857630055.mount: Deactivated successfully. May 16 00:43:38.205356 env[1215]: time="2025-05-16T00:43:38.205309051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.206837 env[1215]: time="2025-05-16T00:43:38.206789863Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.208895 env[1215]: time="2025-05-16T00:43:38.208869995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.211469 env[1215]: time="2025-05-16T00:43:38.211426939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.212521 env[1215]: time="2025-05-16T00:43:38.212486631Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 00:43:38.213180 env[1215]: time="2025-05-16T00:43:38.213131126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:43:38.663742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167090751.mount: Deactivated successfully. 
May 16 00:43:38.667681 env[1215]: time="2025-05-16T00:43:38.667638454Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.669138 env[1215]: time="2025-05-16T00:43:38.669104717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.671166 env[1215]: time="2025-05-16T00:43:38.671135161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.672599 env[1215]: time="2025-05-16T00:43:38.672561932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:38.673164 env[1215]: time="2025-05-16T00:43:38.673136912Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 00:43:38.673660 env[1215]: time="2025-05-16T00:43:38.673635970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:43:39.227384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008319101.mount: Deactivated successfully. May 16 00:43:41.526943 env[1215]: time="2025-05-16T00:43:41.526882833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:41.528883 env[1215]: time="2025-05-16T00:43:41.528846685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:41.531166 env[1215]: time="2025-05-16T00:43:41.531135795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:41.533734 env[1215]: time="2025-05-16T00:43:41.533696657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:41.534764 env[1215]: time="2025-05-16T00:43:41.534716933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 16 00:43:46.196927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:43:46.197164 systemd[1]: Stopped kubelet.service. May 16 00:43:46.198488 systemd[1]: Starting kubelet.service... May 16 00:43:46.295998 systemd[1]: Started kubelet.service. 
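Each image pull above appears as a PullImage entry followed later by a matching "returns image reference" entry, so per-image pull latency can be read off by pairing the two timestamps (the etcd image, for example, spans roughly 00:43:38.673 to 00:43:41.534). The following rough sketch does that pairing; it assumes the journal text is piped on stdin with one entry per line, and the regular expressions are illustrative rather than part of any containerd tooling.

// Rough sketch: pair `PullImage "X"` entries with the matching
// `PullImage "X" returns image reference` entries from containerd journal
// output like the above and print how long each pull took.
package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
    "strings"
    "time"
)

var (
    tsRe  = regexp.MustCompile(`time="([^"]+)"`)            // the entry's own RFC3339 timestamp
    imgRe = regexp.MustCompile(`PullImage \\"([^\\"]+)\\"`) // image name inside the escaped quotes
)

func main() {
    started := map[string]time.Time{} // image -> time the pull began
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries can be long
    for sc.Scan() {
        line := sc.Text()
        img := imgRe.FindStringSubmatch(line)
        ts := tsRe.FindStringSubmatch(line)
        if img == nil || ts == nil {
            continue
        }
        t, err := time.Parse(time.RFC3339Nano, ts[1])
        if err != nil {
            continue
        }
        if strings.Contains(line, "returns image reference") {
            if begin, ok := started[img[1]]; ok {
                fmt.Printf("%-55s %v\n", img[1], t.Sub(begin))
            }
        } else {
            started[img[1]] = t
        }
    }
}

Fed with the raw journal text (for example via journalctl --no-pager piped into the program), this would print one line per pulled image with the elapsed time between the two entries.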
May 16 00:43:46.333210 kubelet[1496]: E0516 00:43:46.333159 1496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:43:46.335034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:43:46.335157 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:43:47.812618 systemd[1]: Stopped kubelet.service. May 16 00:43:47.814558 systemd[1]: Starting kubelet.service... May 16 00:43:47.835826 systemd[1]: Reloading. May 16 00:43:47.887742 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-05-16T00:43:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:47.887771 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-05-16T00:43:47Z" level=info msg="torcx already run" May 16 00:43:47.978185 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:43:47.978205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:43:47.995749 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:48.072562 systemd[1]: Started kubelet.service. May 16 00:43:48.074328 systemd[1]: Stopping kubelet.service... May 16 00:43:48.074584 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:43:48.074776 systemd[1]: Stopped kubelet.service. May 16 00:43:48.076349 systemd[1]: Starting kubelet.service... May 16 00:43:48.172832 systemd[1]: Started kubelet.service. May 16 00:43:48.217367 kubelet[1575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:43:48.217367 kubelet[1575]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:43:48.217367 kubelet[1575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:43:48.217760 kubelet[1575]: I0516 00:43:48.217542 1575 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:43:48.924943 kubelet[1575]: I0516 00:43:48.924895 1575 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:43:48.924943 kubelet[1575]: I0516 00:43:48.924929 1575 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:43:48.925235 kubelet[1575]: I0516 00:43:48.925211 1575 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:43:48.978540 kubelet[1575]: E0516 00:43:48.978501 1575 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:48.980263 kubelet[1575]: I0516 00:43:48.980241 1575 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:43:48.987023 kubelet[1575]: E0516 00:43:48.986980 1575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:43:48.987023 kubelet[1575]: I0516 00:43:48.987025 1575 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:43:48.989970 kubelet[1575]: I0516 00:43:48.989941 1575 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:43:48.990733 kubelet[1575]: I0516 00:43:48.990680 1575 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:43:48.990945 kubelet[1575]: I0516 00:43:48.990732 1575 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:43:48.991031 kubelet[1575]: I0516 00:43:48.991013 1575 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:43:48.991031 kubelet[1575]: I0516 00:43:48.991023 1575 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:43:48.991240 kubelet[1575]: I0516 00:43:48.991215 1575 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:48.993806 kubelet[1575]: I0516 00:43:48.993780 1575 kubelet.go:446] "Attempting to sync node with API server" May 16 00:43:48.993861 kubelet[1575]: I0516 00:43:48.993818 1575 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:43:48.993861 kubelet[1575]: I0516 00:43:48.993837 1575 kubelet.go:352] "Adding apiserver pod source" May 16 00:43:48.993861 kubelet[1575]: I0516 00:43:48.993847 1575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:43:49.011040 kubelet[1575]: W0516 00:43:49.010986 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:49.011215 kubelet[1575]: E0516 00:43:49.011192 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:49.011441 kubelet[1575]: W0516 00:43:49.011393 1575 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:49.011490 kubelet[1575]: E0516 00:43:49.011451 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:49.013078 kubelet[1575]: I0516 00:43:49.013060 1575 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:43:49.013890 kubelet[1575]: I0516 00:43:49.013872 1575 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:43:49.014084 kubelet[1575]: W0516 00:43:49.014072 1575 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:43:49.015028 kubelet[1575]: I0516 00:43:49.015006 1575 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:43:49.015151 kubelet[1575]: I0516 00:43:49.015139 1575 server.go:1287] "Started kubelet" May 16 00:43:49.015976 kubelet[1575]: I0516 00:43:49.015948 1575 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:43:49.017763 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:43:49.017850 kubelet[1575]: I0516 00:43:49.016826 1575 server.go:479] "Adding debug handlers to kubelet server" May 16 00:43:49.017850 kubelet[1575]: I0516 00:43:49.015462 1575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:43:49.017850 kubelet[1575]: I0516 00:43:49.017079 1575 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:43:49.018100 kubelet[1575]: I0516 00:43:49.018080 1575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:43:49.019099 kubelet[1575]: I0516 00:43:49.018115 1575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:43:49.019396 kubelet[1575]: I0516 00:43:49.019374 1575 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:43:49.019713 kubelet[1575]: I0516 00:43:49.019690 1575 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:43:49.019764 kubelet[1575]: I0516 00:43:49.019743 1575 reconciler.go:26] "Reconciler: start to sync state" May 16 00:43:49.019856 kubelet[1575]: I0516 00:43:49.019829 1575 factory.go:221] Registration of the systemd container factory successfully May 16 00:43:49.019942 kubelet[1575]: I0516 00:43:49.019921 1575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:43:49.020130 kubelet[1575]: W0516 00:43:49.020091 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 
00:43:49.020186 kubelet[1575]: E0516 00:43:49.020139 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:49.020488 kubelet[1575]: E0516 00:43:49.020464 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:49.020570 kubelet[1575]: E0516 00:43:49.020545 1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" May 16 00:43:49.020653 kubelet[1575]: E0516 00:43:49.020631 1575 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:43:49.021239 kubelet[1575]: I0516 00:43:49.021216 1575 factory.go:221] Registration of the containerd container factory successfully May 16 00:43:49.021936 kubelet[1575]: E0516 00:43:49.021669 1575 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdb3b7e82ebf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:43:49.015104498 +0000 UTC m=+0.838564409,LastTimestamp:2025-05-16 00:43:49.015104498 +0000 UTC m=+0.838564409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:43:49.032561 kubelet[1575]: I0516 00:43:49.032538 1575 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:43:49.032561 kubelet[1575]: I0516 00:43:49.032554 1575 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:43:49.032670 kubelet[1575]: I0516 00:43:49.032634 1575 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:49.035679 kubelet[1575]: I0516 00:43:49.035641 1575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:43:49.036749 kubelet[1575]: I0516 00:43:49.036724 1575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:43:49.036863 kubelet[1575]: I0516 00:43:49.036850 1575 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:43:49.036954 kubelet[1575]: I0516 00:43:49.036941 1575 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
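The reflector warnings above ("dial tcp 10.0.0.85:6443: connect: connection refused" for Services, Nodes and CSIDrivers) are expected at this stage: the kubelet is up, but the kube-apiserver it talks to is itself one of the static pods it has not created yet, so client-go keeps retrying until the port opens shortly afterwards in the log. A bootstrap or health-check script could wait for that port in the same spirit; the endpoint is taken from the log, the rest is a hypothetical sketch.

// Illustrative sketch only: poll the apiserver endpoint seen in the reflector
// errors above until it accepts TCP connections.
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    const apiserver = "10.0.0.85:6443" // endpoint from the connection-refused errors above
    for {
        conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("apiserver is accepting connections")
            return
        }
        fmt.Printf("still waiting: %v\n", err)
        time.Sleep(2 * time.Second) // crude fixed delay; client-go itself uses an increasing backoff
    }
}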
May 16 00:43:49.037028 kubelet[1575]: I0516 00:43:49.037018 1575 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:43:49.037127 kubelet[1575]: E0516 00:43:49.037110 1575 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:43:49.037722 kubelet[1575]: W0516 00:43:49.037674 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:49.037875 kubelet[1575]: E0516 00:43:49.037852 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:49.116545 kubelet[1575]: I0516 00:43:49.116490 1575 policy_none.go:49] "None policy: Start" May 16 00:43:49.116545 kubelet[1575]: I0516 00:43:49.116524 1575 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:43:49.116545 kubelet[1575]: I0516 00:43:49.116537 1575 state_mem.go:35] "Initializing new in-memory state store" May 16 00:43:49.120909 kubelet[1575]: E0516 00:43:49.120879 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:49.138030 kubelet[1575]: E0516 00:43:49.137994 1575 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:43:49.142013 systemd[1]: Created slice kubepods.slice. May 16 00:43:49.146122 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:43:49.148693 systemd[1]: Created slice kubepods-besteffort.slice. May 16 00:43:49.159748 kubelet[1575]: I0516 00:43:49.159706 1575 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:43:49.159917 kubelet[1575]: I0516 00:43:49.159897 1575 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:43:49.159969 kubelet[1575]: I0516 00:43:49.159915 1575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:43:49.160324 kubelet[1575]: I0516 00:43:49.160297 1575 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:43:49.161163 kubelet[1575]: E0516 00:43:49.161141 1575 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:43:49.161273 kubelet[1575]: E0516 00:43:49.161258 1575 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:43:49.221865 kubelet[1575]: E0516 00:43:49.221823 1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" May 16 00:43:49.261990 kubelet[1575]: I0516 00:43:49.261922 1575 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:43:49.262356 kubelet[1575]: E0516 00:43:49.262314 1575 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" May 16 00:43:49.345331 systemd[1]: Created slice kubepods-burstable-podce1bf7f3a9233ba5a9856322139c3a8a.slice. May 16 00:43:49.366171 kubelet[1575]: E0516 00:43:49.366095 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:49.368411 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 00:43:49.378905 kubelet[1575]: E0516 00:43:49.378787 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:49.380911 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 16 00:43:49.382344 kubelet[1575]: E0516 00:43:49.382317 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:49.463933 kubelet[1575]: I0516 00:43:49.463905 1575 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:43:49.464483 kubelet[1575]: E0516 00:43:49.464454 1575 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" May 16 00:43:49.521438 kubelet[1575]: I0516 00:43:49.521349 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:43:49.521592 kubelet[1575]: I0516 00:43:49.521571 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:49.521670 kubelet[1575]: I0516 00:43:49.521655 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:49.521747 
kubelet[1575]: I0516 00:43:49.521734 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:49.521858 kubelet[1575]: I0516 00:43:49.521844 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:49.521921 kubelet[1575]: I0516 00:43:49.521910 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:49.521991 kubelet[1575]: I0516 00:43:49.521979 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:49.522055 kubelet[1575]: I0516 00:43:49.522044 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:49.522147 kubelet[1575]: I0516 00:43:49.522124 1575 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:49.623154 kubelet[1575]: E0516 00:43:49.623122 1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" May 16 00:43:49.666575 kubelet[1575]: E0516 00:43:49.666531 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:49.667187 env[1215]: time="2025-05-16T00:43:49.667134519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce1bf7f3a9233ba5a9856322139c3a8a,Namespace:kube-system,Attempt:0,}" May 16 00:43:49.679904 kubelet[1575]: E0516 00:43:49.679865 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:49.680558 env[1215]: time="2025-05-16T00:43:49.680301184Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 00:43:49.683042 kubelet[1575]: E0516 00:43:49.683020 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:49.683518 env[1215]: time="2025-05-16T00:43:49.683474587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 00:43:49.856400 kubelet[1575]: W0516 00:43:49.856237 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:49.856400 kubelet[1575]: E0516 00:43:49.856302 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:49.866494 kubelet[1575]: I0516 00:43:49.866444 1575 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:43:49.866769 kubelet[1575]: E0516 00:43:49.866736 1575 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" May 16 00:43:50.055312 kubelet[1575]: W0516 00:43:50.055275 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:50.055421 kubelet[1575]: E0516 00:43:50.055316 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:50.123181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653878325.mount: Deactivated successfully. 
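The recurring dns.go warnings above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the resolv.conf the kubelet hands to pods lists more nameservers than the classic resolver limit of three, so only the first three are applied and any extras are dropped. A small illustration of that trimming (not the kubelet's code) follows.

// Quick sketch of what the "Nameserver limits exceeded" warnings report:
// read /etc/resolv.conf, keep at most three nameservers, and show what
// would be omitted.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxNameservers = 3 // classic resolver limit; extras are dropped

func main() {
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if len(servers) > maxNameservers {
        fmt.Printf("nameserver limit exceeded: applying %v, omitting %v\n",
            servers[:maxNameservers], servers[maxNameservers:])
    } else {
        fmt.Printf("nameservers within limit: %v\n", servers)
    }
}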
May 16 00:43:50.128928 env[1215]: time="2025-05-16T00:43:50.128876401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.129994 env[1215]: time="2025-05-16T00:43:50.129952096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.130895 env[1215]: time="2025-05-16T00:43:50.130845798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.131858 env[1215]: time="2025-05-16T00:43:50.131829995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.133348 env[1215]: time="2025-05-16T00:43:50.133317468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.134978 env[1215]: time="2025-05-16T00:43:50.134954806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.138303 env[1215]: time="2025-05-16T00:43:50.138270264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.139066 env[1215]: time="2025-05-16T00:43:50.139033249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.142204 env[1215]: time="2025-05-16T00:43:50.142171520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.143728 env[1215]: time="2025-05-16T00:43:50.143700616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.144657 env[1215]: time="2025-05-16T00:43:50.144633776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.145469 env[1215]: time="2025-05-16T00:43:50.145443432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:43:50.188912 env[1215]: time="2025-05-16T00:43:50.188848912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:50.189056 env[1215]: time="2025-05-16T00:43:50.188897145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:50.189056 env[1215]: time="2025-05-16T00:43:50.188908922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:50.189195 env[1215]: time="2025-05-16T00:43:50.189155533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b059092a940d3db0b7cdbb3de15b8fe3cbebff827ed8639d2bc1c24ff5e3b7f pid=1633 runtime=io.containerd.runc.v2 May 16 00:43:50.189261 env[1215]: time="2025-05-16T00:43:50.189165548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:50.189261 env[1215]: time="2025-05-16T00:43:50.189210295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:50.189261 env[1215]: time="2025-05-16T00:43:50.189220630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:50.189472 env[1215]: time="2025-05-16T00:43:50.189403945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a9812bbf0ac35f76846a46808932c777fe002672a227df84f9927e231b4da7a pid=1635 runtime=io.containerd.runc.v2 May 16 00:43:50.189552 env[1215]: time="2025-05-16T00:43:50.189489674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:43:50.189552 env[1215]: time="2025-05-16T00:43:50.189524166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:43:50.189619 env[1215]: time="2025-05-16T00:43:50.189538668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:43:50.189757 env[1215]: time="2025-05-16T00:43:50.189718097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38f07506b3dc4d222d2ca111e180a67ec411be337833d0517b8a1db5e0688dc2 pid=1636 runtime=io.containerd.runc.v2 May 16 00:43:50.203048 systemd[1]: Started cri-containerd-1b059092a940d3db0b7cdbb3de15b8fe3cbebff827ed8639d2bc1c24ff5e3b7f.scope. May 16 00:43:50.204104 systemd[1]: Started cri-containerd-38f07506b3dc4d222d2ca111e180a67ec411be337833d0517b8a1db5e0688dc2.scope. May 16 00:43:50.221915 systemd[1]: Started cri-containerd-8a9812bbf0ac35f76846a46808932c777fe002672a227df84f9927e231b4da7a.scope. 
May 16 00:43:50.274002 env[1215]: time="2025-05-16T00:43:50.273954954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b059092a940d3db0b7cdbb3de15b8fe3cbebff827ed8639d2bc1c24ff5e3b7f\"" May 16 00:43:50.275238 kubelet[1575]: E0516 00:43:50.275205 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:50.277782 env[1215]: time="2025-05-16T00:43:50.277737192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce1bf7f3a9233ba5a9856322139c3a8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"38f07506b3dc4d222d2ca111e180a67ec411be337833d0517b8a1db5e0688dc2\"" May 16 00:43:50.278647 kubelet[1575]: E0516 00:43:50.278557 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:50.280789 env[1215]: time="2025-05-16T00:43:50.280753359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a9812bbf0ac35f76846a46808932c777fe002672a227df84f9927e231b4da7a\"" May 16 00:43:50.280947 env[1215]: time="2025-05-16T00:43:50.280888683Z" level=info msg="CreateContainer within sandbox \"1b059092a940d3db0b7cdbb3de15b8fe3cbebff827ed8639d2bc1c24ff5e3b7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:43:50.281141 env[1215]: time="2025-05-16T00:43:50.281112338Z" level=info msg="CreateContainer within sandbox \"38f07506b3dc4d222d2ca111e180a67ec411be337833d0517b8a1db5e0688dc2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:43:50.281348 kubelet[1575]: E0516 00:43:50.281326 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:50.282992 env[1215]: time="2025-05-16T00:43:50.282961474Z" level=info msg="CreateContainer within sandbox \"8a9812bbf0ac35f76846a46808932c777fe002672a227df84f9927e231b4da7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:43:50.298774 env[1215]: time="2025-05-16T00:43:50.298731949Z" level=info msg="CreateContainer within sandbox \"38f07506b3dc4d222d2ca111e180a67ec411be337833d0517b8a1db5e0688dc2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f66a4f09ed501e138605bd2e0e31f39e45dafb6ada99d61c85924e4a55334691\"" May 16 00:43:50.299604 env[1215]: time="2025-05-16T00:43:50.299570247Z" level=info msg="StartContainer for \"f66a4f09ed501e138605bd2e0e31f39e45dafb6ada99d61c85924e4a55334691\"" May 16 00:43:50.300617 env[1215]: time="2025-05-16T00:43:50.300580003Z" level=info msg="CreateContainer within sandbox \"1b059092a940d3db0b7cdbb3de15b8fe3cbebff827ed8639d2bc1c24ff5e3b7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d300f41da07883df78e9fe702315eb62022233f2d4ad75de961e877614a20ed5\"" May 16 00:43:50.300978 env[1215]: time="2025-05-16T00:43:50.300949798Z" level=info msg="StartContainer for \"d300f41da07883df78e9fe702315eb62022233f2d4ad75de961e877614a20ed5\"" May 16 00:43:50.301775 env[1215]: time="2025-05-16T00:43:50.301743310Z" level=info 
msg="CreateContainer within sandbox \"8a9812bbf0ac35f76846a46808932c777fe002672a227df84f9927e231b4da7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fcd0b4ad0295c4702559ca0fcd08cc7d1644e52a8e600f1e2ee1b0236f756239\"" May 16 00:43:50.302180 env[1215]: time="2025-05-16T00:43:50.302150961Z" level=info msg="StartContainer for \"fcd0b4ad0295c4702559ca0fcd08cc7d1644e52a8e600f1e2ee1b0236f756239\"" May 16 00:43:50.316327 systemd[1]: Started cri-containerd-d300f41da07883df78e9fe702315eb62022233f2d4ad75de961e877614a20ed5.scope. May 16 00:43:50.317156 systemd[1]: Started cri-containerd-f66a4f09ed501e138605bd2e0e31f39e45dafb6ada99d61c85924e4a55334691.scope. May 16 00:43:50.323060 systemd[1]: Started cri-containerd-fcd0b4ad0295c4702559ca0fcd08cc7d1644e52a8e600f1e2ee1b0236f756239.scope. May 16 00:43:50.389563 env[1215]: time="2025-05-16T00:43:50.389459309Z" level=info msg="StartContainer for \"d300f41da07883df78e9fe702315eb62022233f2d4ad75de961e877614a20ed5\" returns successfully" May 16 00:43:50.419138 env[1215]: time="2025-05-16T00:43:50.419094237Z" level=info msg="StartContainer for \"fcd0b4ad0295c4702559ca0fcd08cc7d1644e52a8e600f1e2ee1b0236f756239\" returns successfully" May 16 00:43:50.424719 kubelet[1575]: E0516 00:43:50.424674 1575 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" May 16 00:43:50.430236 env[1215]: time="2025-05-16T00:43:50.430202593Z" level=info msg="StartContainer for \"f66a4f09ed501e138605bd2e0e31f39e45dafb6ada99d61c85924e4a55334691\" returns successfully" May 16 00:43:50.443217 kubelet[1575]: W0516 00:43:50.443166 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:50.443306 kubelet[1575]: E0516 00:43:50.443235 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:50.453666 kubelet[1575]: W0516 00:43:50.453605 1575 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused May 16 00:43:50.461812 kubelet[1575]: E0516 00:43:50.453675 1575 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" May 16 00:43:50.668641 kubelet[1575]: I0516 00:43:50.668535 1575 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:43:51.043360 kubelet[1575]: E0516 00:43:51.043326 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:51.043507 kubelet[1575]: E0516 00:43:51.043448 1575 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:51.048106 kubelet[1575]: E0516 00:43:51.048072 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:51.048205 kubelet[1575]: E0516 00:43:51.048186 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:51.051225 kubelet[1575]: E0516 00:43:51.051055 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:51.051225 kubelet[1575]: E0516 00:43:51.051158 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:52.053538 kubelet[1575]: E0516 00:43:52.053094 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:52.053538 kubelet[1575]: E0516 00:43:52.053220 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:52.053538 kubelet[1575]: E0516 00:43:52.053420 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:52.053538 kubelet[1575]: E0516 00:43:52.053493 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:52.360939 kubelet[1575]: E0516 00:43:52.360843 1575 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:43:52.403375 kubelet[1575]: I0516 00:43:52.403344 1575 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:43:52.403569 kubelet[1575]: E0516 00:43:52.403555 1575 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:43:52.414457 kubelet[1575]: E0516 00:43:52.414428 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:52.514706 kubelet[1575]: E0516 00:43:52.514666 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:52.615258 kubelet[1575]: E0516 00:43:52.615150 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:52.716479 kubelet[1575]: E0516 00:43:52.716438 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:52.817006 kubelet[1575]: E0516 00:43:52.816954 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:52.917817 kubelet[1575]: E0516 00:43:52.917660 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" May 16 00:43:53.017986 kubelet[1575]: E0516 00:43:53.017925 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:53.054717 kubelet[1575]: E0516 00:43:53.054694 1575 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:43:53.055035 kubelet[1575]: E0516 00:43:53.054839 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:53.118845 kubelet[1575]: E0516 00:43:53.118790 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:53.219683 kubelet[1575]: E0516 00:43:53.219643 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:53.320259 kubelet[1575]: E0516 00:43:53.320196 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:53.420825 kubelet[1575]: E0516 00:43:53.420770 1575 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:53.521320 kubelet[1575]: I0516 00:43:53.521212 1575 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:43:53.534314 kubelet[1575]: I0516 00:43:53.534274 1575 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:43:53.540806 kubelet[1575]: I0516 00:43:53.540760 1575 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:43:53.996734 kubelet[1575]: I0516 00:43:53.996691 1575 apiserver.go:52] "Watching apiserver" May 16 00:43:53.999002 kubelet[1575]: E0516 00:43:53.998976 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:53.999126 kubelet[1575]: E0516 00:43:53.999097 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:54.020733 kubelet[1575]: I0516 00:43:54.020681 1575 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:43:54.055863 kubelet[1575]: E0516 00:43:54.055835 1575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:54.643929 systemd[1]: Reloading. May 16 00:43:54.689144 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-05-16T00:43:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:43:54.689177 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-05-16T00:43:54Z" level=info msg="torcx already run" May 16 00:43:54.750144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 16 00:43:54.750165 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:43:54.766236 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:43:54.846378 systemd[1]: Stopping kubelet.service... May 16 00:43:54.869293 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:43:54.869500 systemd[1]: Stopped kubelet.service. May 16 00:43:54.869554 systemd[1]: kubelet.service: Consumed 1.223s CPU time. May 16 00:43:54.871787 systemd[1]: Starting kubelet.service... May 16 00:43:54.965861 systemd[1]: Started kubelet.service. May 16 00:43:55.001857 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:43:55.002191 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:43:55.002239 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:43:55.002392 kubelet[1918]: I0516 00:43:55.002361 1918 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:43:55.008138 kubelet[1918]: I0516 00:43:55.008096 1918 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:43:55.008138 kubelet[1918]: I0516 00:43:55.008127 1918 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:43:55.008370 kubelet[1918]: I0516 00:43:55.008345 1918 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:43:55.009530 kubelet[1918]: I0516 00:43:55.009507 1918 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:43:55.011670 kubelet[1918]: I0516 00:43:55.011645 1918 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:43:55.015455 kubelet[1918]: E0516 00:43:55.015423 1918 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:43:55.015455 kubelet[1918]: I0516 00:43:55.015449 1918 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:43:55.017804 kubelet[1918]: I0516 00:43:55.017775 1918 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:43:55.018114 kubelet[1918]: I0516 00:43:55.018087 1918 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:43:55.018368 kubelet[1918]: I0516 00:43:55.018182 1918 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:43:55.018509 kubelet[1918]: I0516 00:43:55.018494 1918 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:43:55.018567 kubelet[1918]: I0516 00:43:55.018558 1918 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:43:55.018662 kubelet[1918]: I0516 00:43:55.018650 1918 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:55.018897 kubelet[1918]: I0516 00:43:55.018880 1918 kubelet.go:446] "Attempting to sync node with API server" May 16 00:43:55.019011 kubelet[1918]: I0516 00:43:55.018976 1918 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:43:55.019085 kubelet[1918]: I0516 00:43:55.019074 1918 kubelet.go:352] "Adding apiserver pod source" May 16 00:43:55.019141 kubelet[1918]: I0516 00:43:55.019131 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:43:55.021293 kubelet[1918]: I0516 00:43:55.021272 1918 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:43:55.021868 kubelet[1918]: I0516 00:43:55.021845 1918 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:43:55.022348 kubelet[1918]: I0516 00:43:55.022326 1918 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:43:55.022460 kubelet[1918]: I0516 00:43:55.022449 1918 server.go:1287] "Started kubelet" May 16 00:43:55.022920 kubelet[1918]: I0516 00:43:55.022867 1918 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:43:55.023222 kubelet[1918]: I0516 00:43:55.023174 1918 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:43:55.023515 kubelet[1918]: I0516 00:43:55.023493 1918 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:43:55.033759 kubelet[1918]: I0516 00:43:55.023781 1918 server.go:479] "Adding debug handlers to kubelet server" May 16 00:43:55.033759 kubelet[1918]: I0516 00:43:55.024744 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:43:55.039961 kubelet[1918]: I0516 00:43:55.024883 1918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:43:55.040068 kubelet[1918]: I0516 00:43:55.040038 1918 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:43:55.040350 kubelet[1918]: E0516 00:43:55.040318 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:43:55.040553 kubelet[1918]: E0516 00:43:55.032050 1918 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:43:55.040860 kubelet[1918]: I0516 00:43:55.040832 1918 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:43:55.041006 kubelet[1918]: I0516 00:43:55.040986 1918 reconciler.go:26] "Reconciler: start to sync state" May 16 00:43:55.045828 kubelet[1918]: I0516 00:43:55.045781 1918 factory.go:221] Registration of the systemd container factory successfully May 16 00:43:55.046058 kubelet[1918]: I0516 00:43:55.046035 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:43:55.051493 kubelet[1918]: I0516 00:43:55.050668 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:43:55.052430 kubelet[1918]: I0516 00:43:55.052129 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:43:55.052430 kubelet[1918]: I0516 00:43:55.052178 1918 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:43:55.052430 kubelet[1918]: I0516 00:43:55.052200 1918 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:43:55.052430 kubelet[1918]: I0516 00:43:55.052207 1918 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:43:55.052580 kubelet[1918]: E0516 00:43:55.052453 1918 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:43:55.058268 kubelet[1918]: I0516 00:43:55.058239 1918 factory.go:221] Registration of the containerd container factory successfully May 16 00:43:55.087020 kubelet[1918]: I0516 00:43:55.086994 1918 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:43:55.087020 kubelet[1918]: I0516 00:43:55.087012 1918 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:43:55.087185 kubelet[1918]: I0516 00:43:55.087031 1918 state_mem.go:36] "Initialized new in-memory state store" May 16 00:43:55.087230 kubelet[1918]: I0516 00:43:55.087211 1918 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:43:55.087264 kubelet[1918]: I0516 00:43:55.087228 1918 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:43:55.087264 kubelet[1918]: I0516 00:43:55.087247 1918 policy_none.go:49] "None policy: Start" May 16 00:43:55.087264 kubelet[1918]: I0516 00:43:55.087256 1918 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:43:55.087264 kubelet[1918]: I0516 00:43:55.087265 1918 state_mem.go:35] "Initializing new in-memory state store" May 16 00:43:55.087372 kubelet[1918]: I0516 00:43:55.087362 1918 state_mem.go:75] "Updated machine memory state" May 16 00:43:55.091678 kubelet[1918]: I0516 00:43:55.091647 1918 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:43:55.091912 kubelet[1918]: I0516 00:43:55.091892 1918 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:43:55.092224 kubelet[1918]: I0516 00:43:55.092185 1918 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:43:55.092482 kubelet[1918]: I0516 00:43:55.092459 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:43:55.093376 kubelet[1918]: E0516 00:43:55.093350 1918 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:43:55.153480 kubelet[1918]: I0516 00:43:55.153385 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:43:55.153645 kubelet[1918]: I0516 00:43:55.153600 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:43:55.155415 kubelet[1918]: I0516 00:43:55.153865 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.160895 kubelet[1918]: E0516 00:43:55.160821 1918 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:43:55.161017 kubelet[1918]: E0516 00:43:55.160959 1918 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:43:55.161017 kubelet[1918]: E0516 00:43:55.161001 1918 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.196359 kubelet[1918]: I0516 00:43:55.196318 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:43:55.202546 kubelet[1918]: I0516 00:43:55.202503 1918 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 00:43:55.202654 kubelet[1918]: I0516 00:43:55.202580 1918 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:43:55.242230 kubelet[1918]: I0516 00:43:55.242112 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:43:55.242230 kubelet[1918]: I0516 00:43:55.242159 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:55.242230 kubelet[1918]: I0516 00:43:55.242180 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:55.242230 kubelet[1918]: I0516 00:43:55.242197 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.242230 kubelet[1918]: I0516 00:43:55.242217 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " 
pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.242451 kubelet[1918]: I0516 00:43:55.242232 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce1bf7f3a9233ba5a9856322139c3a8a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce1bf7f3a9233ba5a9856322139c3a8a\") " pod="kube-system/kube-apiserver-localhost" May 16 00:43:55.242451 kubelet[1918]: I0516 00:43:55.242248 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.242451 kubelet[1918]: I0516 00:43:55.242265 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.242451 kubelet[1918]: I0516 00:43:55.242282 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:43:55.461493 kubelet[1918]: E0516 00:43:55.461455 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:55.461714 kubelet[1918]: E0516 00:43:55.461456 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:55.461885 kubelet[1918]: E0516 00:43:55.461867 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:55.647767 sudo[1953]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 00:43:55.648600 sudo[1953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 16 00:43:56.020284 kubelet[1918]: I0516 00:43:56.020243 1918 apiserver.go:52] "Watching apiserver" May 16 00:43:56.041799 kubelet[1918]: I0516 00:43:56.041745 1918 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:43:56.067332 kubelet[1918]: E0516 00:43:56.067288 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:56.067561 kubelet[1918]: E0516 00:43:56.067542 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:56.067749 kubelet[1918]: I0516 00:43:56.067735 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 
00:43:56.077420 kubelet[1918]: E0516 00:43:56.077381 1918 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:43:56.077582 kubelet[1918]: E0516 00:43:56.077567 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:56.088498 sudo[1953]: pam_unix(sudo:session): session closed for user root May 16 00:43:56.115286 kubelet[1918]: I0516 00:43:56.115215 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.115196564 podStartE2EDuration="3.115196564s" podCreationTimestamp="2025-05-16 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:56.102139197 +0000 UTC m=+1.130977997" watchObservedRunningTime="2025-05-16 00:43:56.115196564 +0000 UTC m=+1.144035363" May 16 00:43:56.127041 kubelet[1918]: I0516 00:43:56.126683 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.126667242 podStartE2EDuration="3.126667242s" podCreationTimestamp="2025-05-16 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:56.115477102 +0000 UTC m=+1.144315901" watchObservedRunningTime="2025-05-16 00:43:56.126667242 +0000 UTC m=+1.155506041" May 16 00:43:56.138029 kubelet[1918]: I0516 00:43:56.137966 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.137952003 podStartE2EDuration="3.137952003s" podCreationTimestamp="2025-05-16 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:43:56.128005586 +0000 UTC m=+1.156844425" watchObservedRunningTime="2025-05-16 00:43:56.137952003 +0000 UTC m=+1.166790802" May 16 00:43:57.068451 kubelet[1918]: E0516 00:43:57.068407 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:57.068829 kubelet[1918]: E0516 00:43:57.068501 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:57.974400 sudo[1318]: pam_unix(sudo:session): session closed for user root May 16 00:43:57.976311 sshd[1315]: pam_unix(sshd:session): session closed for user core May 16 00:43:57.980561 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:34046.service: Deactivated successfully. May 16 00:43:57.981753 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:43:57.981956 systemd[1]: session-5.scope: Consumed 8.605s CPU time. May 16 00:43:57.982741 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. May 16 00:43:57.983490 systemd-logind[1203]: Removed session 5. 
May 16 00:43:58.179133 kubelet[1918]: E0516 00:43:58.179091 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:43:59.819072 kubelet[1918]: I0516 00:43:59.819033 1918 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:43:59.819424 env[1215]: time="2025-05-16T00:43:59.819348311Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:43:59.819605 kubelet[1918]: I0516 00:43:59.819528 1918 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:44:00.635354 systemd[1]: Created slice kubepods-besteffort-pod53a7a412_3a83_42d2_b125_99d836e66fc9.slice. May 16 00:44:00.649981 systemd[1]: Created slice kubepods-burstable-pod92aacb67_c782_4d58_a9f3_472898597620.slice. May 16 00:44:00.685604 kubelet[1918]: I0516 00:44:00.685566 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-kernel\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.685859 kubelet[1918]: I0516 00:44:00.685840 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwr9\" (UniqueName: \"kubernetes.io/projected/53a7a412-3a83-42d2-b125-99d836e66fc9-kube-api-access-dbwr9\") pod \"kube-proxy-jmghv\" (UID: \"53a7a412-3a83-42d2-b125-99d836e66fc9\") " pod="kube-system/kube-proxy-jmghv" May 16 00:44:00.685975 kubelet[1918]: I0516 00:44:00.685961 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92aacb67-c782-4d58-a9f3-472898597620-cilium-config-path\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686069 kubelet[1918]: I0516 00:44:00.686055 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-xtables-lock\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686169 kubelet[1918]: I0516 00:44:00.686156 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-lib-modules\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686260 kubelet[1918]: I0516 00:44:00.686247 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92aacb67-c782-4d58-a9f3-472898597620-clustermesh-secrets\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686360 kubelet[1918]: I0516 00:44:00.686346 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cni-path\") pod \"cilium-hjw5c\" (UID: 
\"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686493 kubelet[1918]: I0516 00:44:00.686452 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8j9w\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-kube-api-access-t8j9w\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686535 kubelet[1918]: I0516 00:44:00.686510 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53a7a412-3a83-42d2-b125-99d836e66fc9-xtables-lock\") pod \"kube-proxy-jmghv\" (UID: \"53a7a412-3a83-42d2-b125-99d836e66fc9\") " pod="kube-system/kube-proxy-jmghv" May 16 00:44:00.686563 kubelet[1918]: I0516 00:44:00.686533 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53a7a412-3a83-42d2-b125-99d836e66fc9-lib-modules\") pod \"kube-proxy-jmghv\" (UID: \"53a7a412-3a83-42d2-b125-99d836e66fc9\") " pod="kube-system/kube-proxy-jmghv" May 16 00:44:00.686563 kubelet[1918]: I0516 00:44:00.686553 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-hostproc\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686609 kubelet[1918]: I0516 00:44:00.686574 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-hubble-tls\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686609 kubelet[1918]: I0516 00:44:00.686593 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-bpf-maps\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686683 kubelet[1918]: I0516 00:44:00.686617 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53a7a412-3a83-42d2-b125-99d836e66fc9-kube-proxy\") pod \"kube-proxy-jmghv\" (UID: \"53a7a412-3a83-42d2-b125-99d836e66fc9\") " pod="kube-system/kube-proxy-jmghv" May 16 00:44:00.686683 kubelet[1918]: I0516 00:44:00.686631 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-run\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686683 kubelet[1918]: I0516 00:44:00.686647 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-cgroup\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686683 kubelet[1918]: I0516 00:44:00.686672 1918 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-net\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.686774 kubelet[1918]: I0516 00:44:00.686686 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-etc-cni-netd\") pod \"cilium-hjw5c\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " pod="kube-system/cilium-hjw5c" May 16 00:44:00.788112 kubelet[1918]: I0516 00:44:00.788067 1918 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:44:00.832139 systemd[1]: Created slice kubepods-besteffort-poda171ae39_e414_4f98_815b_5bfb1e604716.slice. May 16 00:44:00.888682 kubelet[1918]: I0516 00:44:00.888560 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl62g\" (UniqueName: \"kubernetes.io/projected/a171ae39-e414-4f98-815b-5bfb1e604716-kube-api-access-dl62g\") pod \"cilium-operator-6c4d7847fc-h2snn\" (UID: \"a171ae39-e414-4f98-815b-5bfb1e604716\") " pod="kube-system/cilium-operator-6c4d7847fc-h2snn" May 16 00:44:00.889096 kubelet[1918]: I0516 00:44:00.889074 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a171ae39-e414-4f98-815b-5bfb1e604716-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h2snn\" (UID: \"a171ae39-e414-4f98-815b-5bfb1e604716\") " pod="kube-system/cilium-operator-6c4d7847fc-h2snn" May 16 00:44:00.944262 kubelet[1918]: E0516 00:44:00.944215 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:00.944737 env[1215]: time="2025-05-16T00:44:00.944694375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmghv,Uid:53a7a412-3a83-42d2-b125-99d836e66fc9,Namespace:kube-system,Attempt:0,}" May 16 00:44:00.952386 kubelet[1918]: E0516 00:44:00.952340 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:00.953048 env[1215]: time="2025-05-16T00:44:00.952738962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjw5c,Uid:92aacb67-c782-4d58-a9f3-472898597620,Namespace:kube-system,Attempt:0,}" May 16 00:44:00.961491 env[1215]: time="2025-05-16T00:44:00.961408719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:00.961491 env[1215]: time="2025-05-16T00:44:00.961460402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:00.961491 env[1215]: time="2025-05-16T00:44:00.961471172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:00.961726 env[1215]: time="2025-05-16T00:44:00.961698004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b098b2a13cd50049a2586c273b0be0db478fe1d32eb835dc2e95d75dc70e9d64 pid=2009 runtime=io.containerd.runc.v2 May 16 00:44:00.965385 env[1215]: time="2025-05-16T00:44:00.965283447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:00.965385 env[1215]: time="2025-05-16T00:44:00.965335211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:00.965385 env[1215]: time="2025-05-16T00:44:00.965349703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:00.965524 env[1215]: time="2025-05-16T00:44:00.965496147Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e pid=2027 runtime=io.containerd.runc.v2 May 16 00:44:00.972022 systemd[1]: Started cri-containerd-b098b2a13cd50049a2586c273b0be0db478fe1d32eb835dc2e95d75dc70e9d64.scope. May 16 00:44:00.977843 systemd[1]: Started cri-containerd-107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e.scope. May 16 00:44:01.019694 env[1215]: time="2025-05-16T00:44:01.019646238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmghv,Uid:53a7a412-3a83-42d2-b125-99d836e66fc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b098b2a13cd50049a2586c273b0be0db478fe1d32eb835dc2e95d75dc70e9d64\"" May 16 00:44:01.020695 kubelet[1918]: E0516 00:44:01.020671 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:01.022711 env[1215]: time="2025-05-16T00:44:01.022677512Z" level=info msg="CreateContainer within sandbox \"b098b2a13cd50049a2586c273b0be0db478fe1d32eb835dc2e95d75dc70e9d64\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:44:01.025845 env[1215]: time="2025-05-16T00:44:01.025813630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjw5c,Uid:92aacb67-c782-4d58-a9f3-472898597620,Namespace:kube-system,Attempt:0,} returns sandbox id \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\"" May 16 00:44:01.027260 kubelet[1918]: E0516 00:44:01.027030 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:01.029617 env[1215]: time="2025-05-16T00:44:01.029460158Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:44:01.035817 env[1215]: time="2025-05-16T00:44:01.035765021Z" level=info msg="CreateContainer within sandbox \"b098b2a13cd50049a2586c273b0be0db478fe1d32eb835dc2e95d75dc70e9d64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"617e02132d6f39dbc527d8fd4df7e2ba46cac65dc119464754f8caca77552c33\"" May 16 00:44:01.036407 env[1215]: time="2025-05-16T00:44:01.036239882Z" level=info msg="StartContainer for 
\"617e02132d6f39dbc527d8fd4df7e2ba46cac65dc119464754f8caca77552c33\"" May 16 00:44:01.052491 systemd[1]: Started cri-containerd-617e02132d6f39dbc527d8fd4df7e2ba46cac65dc119464754f8caca77552c33.scope. May 16 00:44:01.087886 env[1215]: time="2025-05-16T00:44:01.087833029Z" level=info msg="StartContainer for \"617e02132d6f39dbc527d8fd4df7e2ba46cac65dc119464754f8caca77552c33\" returns successfully" May 16 00:44:01.134814 kubelet[1918]: E0516 00:44:01.134767 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:01.136334 env[1215]: time="2025-05-16T00:44:01.135553467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2snn,Uid:a171ae39-e414-4f98-815b-5bfb1e604716,Namespace:kube-system,Attempt:0,}" May 16 00:44:01.152563 env[1215]: time="2025-05-16T00:44:01.152422332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:01.152563 env[1215]: time="2025-05-16T00:44:01.152461523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:01.152563 env[1215]: time="2025-05-16T00:44:01.152472212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:01.153545 env[1215]: time="2025-05-16T00:44:01.153484024Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed pid=2125 runtime=io.containerd.runc.v2 May 16 00:44:01.166700 systemd[1]: Started cri-containerd-81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed.scope. 
May 16 00:44:01.208272 env[1215]: time="2025-05-16T00:44:01.208226380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h2snn,Uid:a171ae39-e414-4f98-815b-5bfb1e604716,Namespace:kube-system,Attempt:0,} returns sandbox id \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\"" May 16 00:44:01.209072 kubelet[1918]: E0516 00:44:01.209050 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:02.082818 kubelet[1918]: E0516 00:44:02.082085 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:02.091937 kubelet[1918]: I0516 00:44:02.091886 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmghv" podStartSLOduration=2.091869259 podStartE2EDuration="2.091869259s" podCreationTimestamp="2025-05-16 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:44:02.091773746 +0000 UTC m=+7.120612545" watchObservedRunningTime="2025-05-16 00:44:02.091869259 +0000 UTC m=+7.120708058" May 16 00:44:03.089104 kubelet[1918]: E0516 00:44:03.088127 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:03.617870 kubelet[1918]: E0516 00:44:03.613171 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:04.089022 kubelet[1918]: E0516 00:44:04.088980 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:04.754577 kubelet[1918]: E0516 00:44:04.754256 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:05.011248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685736750.mount: Deactivated successfully. 
May 16 00:44:05.091557 kubelet[1918]: E0516 00:44:05.090701 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:05.091557 kubelet[1918]: E0516 00:44:05.091237 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:06.093008 kubelet[1918]: E0516 00:44:06.092976 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:07.515324 env[1215]: time="2025-05-16T00:44:07.515269316Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.527411 env[1215]: time="2025-05-16T00:44:07.527365611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.534982 env[1215]: time="2025-05-16T00:44:07.534937707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:07.535848 env[1215]: time="2025-05-16T00:44:07.535809696Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:44:07.543400 env[1215]: time="2025-05-16T00:44:07.543354936Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:44:07.549052 env[1215]: time="2025-05-16T00:44:07.549005152Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:44:07.564261 env[1215]: time="2025-05-16T00:44:07.564201055Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\"" May 16 00:44:07.564862 env[1215]: time="2025-05-16T00:44:07.564833784Z" level=info msg="StartContainer for \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\"" May 16 00:44:07.587752 systemd[1]: Started cri-containerd-4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98.scope. May 16 00:44:07.651879 env[1215]: time="2025-05-16T00:44:07.651531148Z" level=info msg="StartContainer for \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\" returns successfully" May 16 00:44:07.678390 systemd[1]: cri-containerd-4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98.scope: Deactivated successfully. 
May 16 00:44:07.767811 env[1215]: time="2025-05-16T00:44:07.767680010Z" level=info msg="shim disconnected" id=4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98 May 16 00:44:07.768026 env[1215]: time="2025-05-16T00:44:07.768004680Z" level=warning msg="cleaning up after shim disconnected" id=4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98 namespace=k8s.io May 16 00:44:07.768098 env[1215]: time="2025-05-16T00:44:07.768083886Z" level=info msg="cleaning up dead shim" May 16 00:44:07.775435 env[1215]: time="2025-05-16T00:44:07.775388066Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2345 runtime=io.containerd.runc.v2\n" May 16 00:44:08.110928 kubelet[1918]: E0516 00:44:08.110826 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:08.114648 env[1215]: time="2025-05-16T00:44:08.114169262Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:44:08.128414 env[1215]: time="2025-05-16T00:44:08.128330910Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\"" May 16 00:44:08.128867 env[1215]: time="2025-05-16T00:44:08.128839872Z" level=info msg="StartContainer for \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\"" May 16 00:44:08.145853 systemd[1]: Started cri-containerd-1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3.scope. May 16 00:44:08.185274 env[1215]: time="2025-05-16T00:44:08.185223439Z" level=info msg="StartContainer for \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\" returns successfully" May 16 00:44:08.189891 kubelet[1918]: E0516 00:44:08.189861 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:08.198573 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:44:08.198910 systemd[1]: Stopped systemd-sysctl.service. May 16 00:44:08.199077 systemd[1]: Stopping systemd-sysctl.service... May 16 00:44:08.200604 systemd[1]: Starting systemd-sysctl.service... May 16 00:44:08.204193 systemd[1]: cri-containerd-1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3.scope: Deactivated successfully. May 16 00:44:08.212541 systemd[1]: Finished systemd-sysctl.service. 
May 16 00:44:08.235217 env[1215]: time="2025-05-16T00:44:08.235153589Z" level=info msg="shim disconnected" id=1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3 May 16 00:44:08.235217 env[1215]: time="2025-05-16T00:44:08.235203017Z" level=warning msg="cleaning up after shim disconnected" id=1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3 namespace=k8s.io May 16 00:44:08.235217 env[1215]: time="2025-05-16T00:44:08.235212942Z" level=info msg="cleaning up dead shim" May 16 00:44:08.243046 env[1215]: time="2025-05-16T00:44:08.242995055Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2409 runtime=io.containerd.runc.v2\n" May 16 00:44:08.560486 systemd[1]: run-containerd-runc-k8s.io-4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98-runc.08ZhgG.mount: Deactivated successfully. May 16 00:44:08.560584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98-rootfs.mount: Deactivated successfully. May 16 00:44:08.959771 update_engine[1207]: I0516 00:44:08.959702 1207 update_attempter.cc:509] Updating boot flags... May 16 00:44:09.114320 kubelet[1918]: E0516 00:44:09.114222 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:09.116817 env[1215]: time="2025-05-16T00:44:09.116728069Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:44:09.149713 env[1215]: time="2025-05-16T00:44:09.149661383Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\"" May 16 00:44:09.151532 env[1215]: time="2025-05-16T00:44:09.150362473Z" level=info msg="StartContainer for \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\"" May 16 00:44:09.169485 systemd[1]: Started cri-containerd-5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e.scope. May 16 00:44:09.204959 env[1215]: time="2025-05-16T00:44:09.204906735Z" level=info msg="StartContainer for \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\" returns successfully" May 16 00:44:09.213992 systemd[1]: cri-containerd-5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e.scope: Deactivated successfully. 
May 16 00:44:09.236875 env[1215]: time="2025-05-16T00:44:09.236786614Z" level=info msg="shim disconnected" id=5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e May 16 00:44:09.236875 env[1215]: time="2025-05-16T00:44:09.236865336Z" level=warning msg="cleaning up after shim disconnected" id=5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e namespace=k8s.io May 16 00:44:09.236875 env[1215]: time="2025-05-16T00:44:09.236874861Z" level=info msg="cleaning up dead shim" May 16 00:44:09.244916 env[1215]: time="2025-05-16T00:44:09.244866152Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2479 runtime=io.containerd.runc.v2\n" May 16 00:44:09.560089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e-rootfs.mount: Deactivated successfully. May 16 00:44:09.965214 env[1215]: time="2025-05-16T00:44:09.965170879Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:09.966549 env[1215]: time="2025-05-16T00:44:09.966506102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:09.967880 env[1215]: time="2025-05-16T00:44:09.967844848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:09.968361 env[1215]: time="2025-05-16T00:44:09.968332785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:44:09.971098 env[1215]: time="2025-05-16T00:44:09.970731049Z" level=info msg="CreateContainer within sandbox \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:44:09.980988 env[1215]: time="2025-05-16T00:44:09.980938267Z" level=info msg="CreateContainer within sandbox \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\"" May 16 00:44:09.981706 env[1215]: time="2025-05-16T00:44:09.981669493Z" level=info msg="StartContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\"" May 16 00:44:10.007388 systemd[1]: Started cri-containerd-13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5.scope. 
May 16 00:44:10.046811 env[1215]: time="2025-05-16T00:44:10.046750920Z" level=info msg="StartContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" returns successfully" May 16 00:44:10.118235 kubelet[1918]: E0516 00:44:10.118198 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:10.120857 env[1215]: time="2025-05-16T00:44:10.120324411Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:44:10.121279 kubelet[1918]: E0516 00:44:10.121239 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:10.135705 env[1215]: time="2025-05-16T00:44:10.135638490Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\"" May 16 00:44:10.136271 env[1215]: time="2025-05-16T00:44:10.136235429Z" level=info msg="StartContainer for \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\"" May 16 00:44:10.154862 systemd[1]: Started cri-containerd-2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6.scope. May 16 00:44:10.170498 kubelet[1918]: I0516 00:44:10.168963 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h2snn" podStartSLOduration=1.409497622 podStartE2EDuration="10.168945071s" podCreationTimestamp="2025-05-16 00:44:00 +0000 UTC" firstStartedPulling="2025-05-16 00:44:01.209848163 +0000 UTC m=+6.238686962" lastFinishedPulling="2025-05-16 00:44:09.969295652 +0000 UTC m=+14.998134411" observedRunningTime="2025-05-16 00:44:10.168695105 +0000 UTC m=+15.197533904" watchObservedRunningTime="2025-05-16 00:44:10.168945071 +0000 UTC m=+15.197783870" May 16 00:44:10.197162 systemd[1]: cri-containerd-2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6.scope: Deactivated successfully. 
May 16 00:44:10.205376 env[1215]: time="2025-05-16T00:44:10.205330475Z" level=info msg="StartContainer for \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\" returns successfully" May 16 00:44:10.398149 env[1215]: time="2025-05-16T00:44:10.398040624Z" level=info msg="shim disconnected" id=2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6 May 16 00:44:10.398149 env[1215]: time="2025-05-16T00:44:10.398085526Z" level=warning msg="cleaning up after shim disconnected" id=2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6 namespace=k8s.io May 16 00:44:10.398149 env[1215]: time="2025-05-16T00:44:10.398096172Z" level=info msg="cleaning up dead shim" May 16 00:44:10.405175 env[1215]: time="2025-05-16T00:44:10.405131619Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2574 runtime=io.containerd.runc.v2\n" May 16 00:44:11.125483 kubelet[1918]: E0516 00:44:11.125449 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:11.126106 kubelet[1918]: E0516 00:44:11.125928 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:11.127631 env[1215]: time="2025-05-16T00:44:11.127596803Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:44:11.141501 env[1215]: time="2025-05-16T00:44:11.141447737Z" level=info msg="CreateContainer within sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\"" May 16 00:44:11.141943 env[1215]: time="2025-05-16T00:44:11.141913159Z" level=info msg="StartContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\"" May 16 00:44:11.157669 systemd[1]: Started cri-containerd-c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45.scope. May 16 00:44:11.213362 env[1215]: time="2025-05-16T00:44:11.213319014Z" level=info msg="StartContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" returns successfully" May 16 00:44:11.336099 kubelet[1918]: I0516 00:44:11.336073 1918 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:44:11.373612 systemd[1]: Created slice kubepods-burstable-pod1b34a018_e138_4eb8_be02_2a96e5983d4f.slice. May 16 00:44:11.380054 systemd[1]: Created slice kubepods-burstable-pod33ac0bf2_d8bc_4382_b25c_98b67bb5656f.slice. 
May 16 00:44:11.543769 kubelet[1918]: I0516 00:44:11.543721 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b34a018-e138-4eb8-be02-2a96e5983d4f-config-volume\") pod \"coredns-668d6bf9bc-wl9c8\" (UID: \"1b34a018-e138-4eb8-be02-2a96e5983d4f\") " pod="kube-system/coredns-668d6bf9bc-wl9c8" May 16 00:44:11.543769 kubelet[1918]: I0516 00:44:11.543768 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33ac0bf2-d8bc-4382-b25c-98b67bb5656f-config-volume\") pod \"coredns-668d6bf9bc-xgvfh\" (UID: \"33ac0bf2-d8bc-4382-b25c-98b67bb5656f\") " pod="kube-system/coredns-668d6bf9bc-xgvfh" May 16 00:44:11.543956 kubelet[1918]: I0516 00:44:11.543790 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph42k\" (UniqueName: \"kubernetes.io/projected/33ac0bf2-d8bc-4382-b25c-98b67bb5656f-kube-api-access-ph42k\") pod \"coredns-668d6bf9bc-xgvfh\" (UID: \"33ac0bf2-d8bc-4382-b25c-98b67bb5656f\") " pod="kube-system/coredns-668d6bf9bc-xgvfh" May 16 00:44:11.543956 kubelet[1918]: I0516 00:44:11.543826 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r65cq\" (UniqueName: \"kubernetes.io/projected/1b34a018-e138-4eb8-be02-2a96e5983d4f-kube-api-access-r65cq\") pod \"coredns-668d6bf9bc-wl9c8\" (UID: \"1b34a018-e138-4eb8-be02-2a96e5983d4f\") " pod="kube-system/coredns-668d6bf9bc-wl9c8" May 16 00:44:11.565829 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:44:11.677689 kubelet[1918]: E0516 00:44:11.677591 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:11.678606 env[1215]: time="2025-05-16T00:44:11.678565601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl9c8,Uid:1b34a018-e138-4eb8-be02-2a96e5983d4f,Namespace:kube-system,Attempt:0,}" May 16 00:44:11.684689 kubelet[1918]: E0516 00:44:11.684657 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:11.685395 env[1215]: time="2025-05-16T00:44:11.685354603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgvfh,Uid:33ac0bf2-d8bc-4382-b25c-98b67bb5656f,Namespace:kube-system,Attempt:0,}" May 16 00:44:11.876868 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:44:12.130100 kubelet[1918]: E0516 00:44:12.130074 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:12.147550 kubelet[1918]: I0516 00:44:12.147493 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjw5c" podStartSLOduration=5.632742804 podStartE2EDuration="12.147474307s" podCreationTimestamp="2025-05-16 00:44:00 +0000 UTC" firstStartedPulling="2025-05-16 00:44:01.028382533 +0000 UTC m=+6.057221332" lastFinishedPulling="2025-05-16 00:44:07.543114036 +0000 UTC m=+12.571952835" observedRunningTime="2025-05-16 00:44:12.147368899 +0000 UTC m=+17.176207698" watchObservedRunningTime="2025-05-16 00:44:12.147474307 +0000 UTC m=+17.176313106" May 16 00:44:13.131615 kubelet[1918]: E0516 00:44:13.131583 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:13.498333 systemd-networkd[1045]: cilium_host: Link UP May 16 00:44:13.502976 systemd-networkd[1045]: cilium_net: Link UP May 16 00:44:13.504591 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:44:13.504677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:44:13.508252 systemd-networkd[1045]: cilium_net: Gained carrier May 16 00:44:13.508473 systemd-networkd[1045]: cilium_host: Gained carrier May 16 00:44:13.587951 systemd-networkd[1045]: cilium_vxlan: Link UP May 16 00:44:13.587958 systemd-networkd[1045]: cilium_vxlan: Gained carrier May 16 00:44:13.797988 systemd-networkd[1045]: cilium_net: Gained IPv6LL May 16 00:44:13.915834 kernel: NET: Registered PF_ALG protocol family May 16 00:44:14.134409 kubelet[1918]: E0516 00:44:14.134316 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:14.405909 systemd-networkd[1045]: cilium_host: Gained IPv6LL May 16 00:44:14.493671 systemd-networkd[1045]: lxc_health: Link UP May 16 00:44:14.502834 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:44:14.502855 systemd-networkd[1045]: lxc_health: Gained carrier May 16 00:44:14.763054 systemd-networkd[1045]: lxc010671ac63d4: Link UP May 16 00:44:14.768840 kernel: eth0: renamed from tmpdc14e May 16 00:44:14.774883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc010671ac63d4: link becomes ready May 16 00:44:14.774736 systemd-networkd[1045]: lxc010671ac63d4: Gained carrier May 16 00:44:14.776315 systemd-networkd[1045]: lxc3fd38fdfbd13: Link UP May 16 00:44:14.782823 kernel: eth0: renamed from tmp8c347 May 16 00:44:14.790824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3fd38fdfbd13: link becomes ready May 16 00:44:14.790873 systemd-networkd[1045]: lxc3fd38fdfbd13: Gained carrier May 16 00:44:14.853937 systemd-networkd[1045]: cilium_vxlan: Gained IPv6LL May 16 00:44:15.135901 kubelet[1918]: E0516 00:44:15.135773 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:15.877965 systemd-networkd[1045]: lxc_health: Gained IPv6LL May 16 00:44:16.137467 kubelet[1918]: E0516 00:44:16.137361 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:16.262941 systemd-networkd[1045]: lxc010671ac63d4: Gained IPv6LL May 16 00:44:16.390963 systemd-networkd[1045]: lxc3fd38fdfbd13: Gained IPv6LL May 16 00:44:17.141396 kubelet[1918]: E0516 00:44:17.141350 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:18.364323 env[1215]: time="2025-05-16T00:44:18.364254344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:18.364323 env[1215]: time="2025-05-16T00:44:18.364295559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:18.364323 env[1215]: time="2025-05-16T00:44:18.364306322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:18.364675 env[1215]: time="2025-05-16T00:44:18.364422523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f pid=3136 runtime=io.containerd.runc.v2 May 16 00:44:18.366773 env[1215]: time="2025-05-16T00:44:18.366713117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:18.366773 env[1215]: time="2025-05-16T00:44:18.366755172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:18.370908 env[1215]: time="2025-05-16T00:44:18.366924431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:18.370908 env[1215]: time="2025-05-16T00:44:18.367500270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c34751b89102be446d39f6109447a3d033241a3164d7042ab32fa3fd52265a2 pid=3144 runtime=io.containerd.runc.v2 May 16 00:44:18.380471 systemd[1]: Started cri-containerd-dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f.scope. May 16 00:44:18.384271 systemd[1]: run-containerd-runc-k8s.io-dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f-runc.Y7kXSP.mount: Deactivated successfully. May 16 00:44:18.386611 systemd[1]: Started cri-containerd-8c34751b89102be446d39f6109447a3d033241a3164d7042ab32fa3fd52265a2.scope. 
May 16 00:44:18.454220 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:18.458207 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:18.473405 env[1215]: time="2025-05-16T00:44:18.473342231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xgvfh,Uid:33ac0bf2-d8bc-4382-b25c-98b67bb5656f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f\"" May 16 00:44:18.478823 kubelet[1918]: E0516 00:44:18.478518 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:18.483111 env[1215]: time="2025-05-16T00:44:18.483069886Z" level=info msg="CreateContainer within sandbox \"dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:44:18.484025 env[1215]: time="2025-05-16T00:44:18.483967518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wl9c8,Uid:1b34a018-e138-4eb8-be02-2a96e5983d4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c34751b89102be446d39f6109447a3d033241a3164d7042ab32fa3fd52265a2\"" May 16 00:44:18.484653 kubelet[1918]: E0516 00:44:18.484631 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:18.486176 env[1215]: time="2025-05-16T00:44:18.486148594Z" level=info msg="CreateContainer within sandbox \"8c34751b89102be446d39f6109447a3d033241a3164d7042ab32fa3fd52265a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:44:18.497903 env[1215]: time="2025-05-16T00:44:18.497858457Z" level=info msg="CreateContainer within sandbox \"dc14e69d3a5cc9b6b05f01b4c7a85dc12f47a66d6129b734933b40c99bd6cd6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb3e863c74fda5f880fc0872578a13bd469ac2c44951a36e2260bc4e29bd1397\"" May 16 00:44:18.498511 env[1215]: time="2025-05-16T00:44:18.498485434Z" level=info msg="StartContainer for \"bb3e863c74fda5f880fc0872578a13bd469ac2c44951a36e2260bc4e29bd1397\"" May 16 00:44:18.502480 env[1215]: time="2025-05-16T00:44:18.502433844Z" level=info msg="CreateContainer within sandbox \"8c34751b89102be446d39f6109447a3d033241a3164d7042ab32fa3fd52265a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89a2692a3588e19ceac4de7698c0df2bd9a3145e107ea33545ef7602c2e9fdbe\"" May 16 00:44:18.503026 env[1215]: time="2025-05-16T00:44:18.502862153Z" level=info msg="StartContainer for \"89a2692a3588e19ceac4de7698c0df2bd9a3145e107ea33545ef7602c2e9fdbe\"" May 16 00:44:18.517594 systemd[1]: Started cri-containerd-89a2692a3588e19ceac4de7698c0df2bd9a3145e107ea33545ef7602c2e9fdbe.scope. May 16 00:44:18.518399 systemd[1]: Started cri-containerd-bb3e863c74fda5f880fc0872578a13bd469ac2c44951a36e2260bc4e29bd1397.scope. 
May 16 00:44:18.562891 env[1215]: time="2025-05-16T00:44:18.562786023Z" level=info msg="StartContainer for \"bb3e863c74fda5f880fc0872578a13bd469ac2c44951a36e2260bc4e29bd1397\" returns successfully" May 16 00:44:18.563414 env[1215]: time="2025-05-16T00:44:18.563384271Z" level=info msg="StartContainer for \"89a2692a3588e19ceac4de7698c0df2bd9a3145e107ea33545ef7602c2e9fdbe\" returns successfully" May 16 00:44:19.142889 kubelet[1918]: E0516 00:44:19.142847 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:19.146425 kubelet[1918]: E0516 00:44:19.145987 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:19.153853 kubelet[1918]: I0516 00:44:19.153785 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xgvfh" podStartSLOduration=19.153768235 podStartE2EDuration="19.153768235s" podCreationTimestamp="2025-05-16 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:44:19.153465855 +0000 UTC m=+24.182304654" watchObservedRunningTime="2025-05-16 00:44:19.153768235 +0000 UTC m=+24.182607034" May 16 00:44:19.175366 kubelet[1918]: I0516 00:44:19.175289 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wl9c8" podStartSLOduration=19.175270628 podStartE2EDuration="19.175270628s" podCreationTimestamp="2025-05-16 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:44:19.174575477 +0000 UTC m=+24.203414276" watchObservedRunningTime="2025-05-16 00:44:19.175270628 +0000 UTC m=+24.204109427" May 16 00:44:20.147914 kubelet[1918]: E0516 00:44:20.147879 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:20.148233 kubelet[1918]: E0516 00:44:20.147948 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:21.149327 kubelet[1918]: E0516 00:44:21.149233 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:21.149662 kubelet[1918]: E0516 00:44:21.149344 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:21.407125 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:38472.service. May 16 00:44:21.446178 sshd[3296]: Accepted publickey for core from 10.0.0.1 port 38472 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:21.448024 sshd[3296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:21.452466 systemd-logind[1203]: New session 6 of user core. May 16 00:44:21.452932 systemd[1]: Started session-6.scope. 
May 16 00:44:21.569524 sshd[3296]: pam_unix(sshd:session): session closed for user core May 16 00:44:21.571852 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:38472.service: Deactivated successfully. May 16 00:44:21.572549 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:44:21.573080 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit. May 16 00:44:21.573857 systemd-logind[1203]: Removed session 6. May 16 00:44:26.576114 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:47414.service. May 16 00:44:26.612808 sshd[3311]: Accepted publickey for core from 10.0.0.1 port 47414 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:26.614291 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:26.617943 systemd-logind[1203]: New session 7 of user core. May 16 00:44:26.618435 systemd[1]: Started session-7.scope. May 16 00:44:26.728029 sshd[3311]: pam_unix(sshd:session): session closed for user core May 16 00:44:26.730133 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:47414.service: Deactivated successfully. May 16 00:44:26.730887 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:44:26.731417 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit. May 16 00:44:26.732195 systemd-logind[1203]: Removed session 7. May 16 00:44:31.734849 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:47422.service. May 16 00:44:31.776879 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 47422 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:31.778485 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:31.781887 systemd-logind[1203]: New session 8 of user core. May 16 00:44:31.782359 systemd[1]: Started session-8.scope. May 16 00:44:31.891112 sshd[3329]: pam_unix(sshd:session): session closed for user core May 16 00:44:31.893518 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:47422.service: Deactivated successfully. May 16 00:44:31.894236 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:44:31.894710 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit. May 16 00:44:31.895418 systemd-logind[1203]: Removed session 8. May 16 00:44:36.896691 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:45190.service. May 16 00:44:36.933121 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 45190 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:36.934512 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:36.938050 systemd-logind[1203]: New session 9 of user core. May 16 00:44:36.938488 systemd[1]: Started session-9.scope. May 16 00:44:37.051843 sshd[3344]: pam_unix(sshd:session): session closed for user core May 16 00:44:37.055012 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:45200.service. May 16 00:44:37.055896 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:45190.service: Deactivated successfully. May 16 00:44:37.056844 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:44:37.057903 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit. May 16 00:44:37.058642 systemd-logind[1203]: Removed session 9. 
May 16 00:44:37.098400 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 45200 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:37.100277 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:37.104842 systemd-logind[1203]: New session 10 of user core. May 16 00:44:37.105200 systemd[1]: Started session-10.scope. May 16 00:44:37.258193 sshd[3358]: pam_unix(sshd:session): session closed for user core May 16 00:44:37.262092 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:45216.service. May 16 00:44:37.264936 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit. May 16 00:44:37.265916 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:44:37.266942 systemd-logind[1203]: Removed session 10. May 16 00:44:37.267369 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:45200.service: Deactivated successfully. May 16 00:44:37.300699 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:37.302012 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:37.306080 systemd[1]: Started session-11.scope. May 16 00:44:37.306091 systemd-logind[1203]: New session 11 of user core. May 16 00:44:37.420949 sshd[3370]: pam_unix(sshd:session): session closed for user core May 16 00:44:37.423557 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:45216.service: Deactivated successfully. May 16 00:44:37.424256 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:44:37.424732 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit. May 16 00:44:37.425473 systemd-logind[1203]: Removed session 11. May 16 00:44:42.426933 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:45228.service. May 16 00:44:42.463854 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 45228 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:42.465376 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:42.468576 systemd-logind[1203]: New session 12 of user core. May 16 00:44:42.469428 systemd[1]: Started session-12.scope. May 16 00:44:42.592620 sshd[3387]: pam_unix(sshd:session): session closed for user core May 16 00:44:42.595468 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:45228.service: Deactivated successfully. May 16 00:44:42.596218 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:44:42.596702 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit. May 16 00:44:42.597345 systemd-logind[1203]: Removed session 12. May 16 00:44:47.599689 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:34094.service. May 16 00:44:47.638941 sshd[3401]: Accepted publickey for core from 10.0.0.1 port 34094 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:47.640716 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:47.644601 systemd-logind[1203]: New session 13 of user core. May 16 00:44:47.644919 systemd[1]: Started session-13.scope. May 16 00:44:47.754333 sshd[3401]: pam_unix(sshd:session): session closed for user core May 16 00:44:47.757137 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:34094.service: Deactivated successfully. May 16 00:44:47.757771 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:44:47.758354 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit. 
May 16 00:44:47.759719 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:34102.service. May 16 00:44:47.760500 systemd-logind[1203]: Removed session 13. May 16 00:44:47.799664 sshd[3414]: Accepted publickey for core from 10.0.0.1 port 34102 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:47.801572 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:47.805005 systemd-logind[1203]: New session 14 of user core. May 16 00:44:47.805988 systemd[1]: Started session-14.scope. May 16 00:44:48.010698 sshd[3414]: pam_unix(sshd:session): session closed for user core May 16 00:44:48.014154 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:34110.service. May 16 00:44:48.014665 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:34102.service: Deactivated successfully. May 16 00:44:48.015367 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:44:48.016049 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit. May 16 00:44:48.016869 systemd-logind[1203]: Removed session 14. May 16 00:44:48.057283 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 34110 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:48.058501 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:48.061962 systemd-logind[1203]: New session 15 of user core. May 16 00:44:48.062865 systemd[1]: Started session-15.scope. May 16 00:44:48.768526 sshd[3425]: pam_unix(sshd:session): session closed for user core May 16 00:44:48.773975 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:34118.service. May 16 00:44:48.775446 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:34110.service: Deactivated successfully. May 16 00:44:48.777651 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:44:48.778882 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. May 16 00:44:48.780054 systemd-logind[1203]: Removed session 15. May 16 00:44:48.814700 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 34118 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:48.816141 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:48.819320 systemd-logind[1203]: New session 16 of user core. May 16 00:44:48.820182 systemd[1]: Started session-16.scope. May 16 00:44:49.034689 sshd[3447]: pam_unix(sshd:session): session closed for user core May 16 00:44:49.038587 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:34120.service. May 16 00:44:49.039174 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:34118.service: Deactivated successfully. May 16 00:44:49.039974 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:44:49.040821 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. May 16 00:44:49.042779 systemd-logind[1203]: Removed session 16. May 16 00:44:49.078675 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:49.079891 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:49.082939 systemd-logind[1203]: New session 17 of user core. May 16 00:44:49.083769 systemd[1]: Started session-17.scope. May 16 00:44:49.191998 sshd[3460]: pam_unix(sshd:session): session closed for user core May 16 00:44:49.194427 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:34120.service: Deactivated successfully. 
May 16 00:44:49.195154 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:44:49.195703 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. May 16 00:44:49.196372 systemd-logind[1203]: Removed session 17. May 16 00:44:54.197445 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:49410.service. May 16 00:44:54.234547 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 49410 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:54.236009 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:54.239069 systemd-logind[1203]: New session 18 of user core. May 16 00:44:54.239926 systemd[1]: Started session-18.scope. May 16 00:44:54.345642 sshd[3477]: pam_unix(sshd:session): session closed for user core May 16 00:44:54.348149 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:49410.service: Deactivated successfully. May 16 00:44:54.348963 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:44:54.349453 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. May 16 00:44:54.350167 systemd-logind[1203]: Removed session 18. May 16 00:44:59.351454 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:49422.service. May 16 00:44:59.391167 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 49422 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:59.392478 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:59.395990 systemd-logind[1203]: New session 19 of user core. May 16 00:44:59.396910 systemd[1]: Started session-19.scope. May 16 00:44:59.514028 sshd[3492]: pam_unix(sshd:session): session closed for user core May 16 00:44:59.516457 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:49422.service: Deactivated successfully. May 16 00:44:59.517198 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:44:59.517684 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. May 16 00:44:59.518367 systemd-logind[1203]: Removed session 19. May 16 00:45:04.517661 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:45812.service. May 16 00:45:04.556240 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 45812 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:04.557532 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:04.560981 systemd-logind[1203]: New session 20 of user core. May 16 00:45:04.562123 systemd[1]: Started session-20.scope. May 16 00:45:04.668967 sshd[3508]: pam_unix(sshd:session): session closed for user core May 16 00:45:04.671594 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:45812.service: Deactivated successfully. May 16 00:45:04.672297 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:45:04.673000 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. May 16 00:45:04.673719 systemd-logind[1203]: Removed session 20. May 16 00:45:09.673199 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:45816.service. May 16 00:45:09.713280 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 45816 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:09.714495 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:09.718360 systemd-logind[1203]: New session 21 of user core. May 16 00:45:09.719027 systemd[1]: Started session-21.scope. 
May 16 00:45:09.840589 sshd[3522]: pam_unix(sshd:session): session closed for user core May 16 00:45:09.844601 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:45832.service. May 16 00:45:09.845175 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:45816.service: Deactivated successfully. May 16 00:45:09.846109 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:45:09.846658 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit. May 16 00:45:09.847413 systemd-logind[1203]: Removed session 21. May 16 00:45:09.887912 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 45832 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:09.889318 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:09.892531 systemd-logind[1203]: New session 22 of user core. May 16 00:45:09.893369 systemd[1]: Started session-22.scope. May 16 00:45:12.361064 env[1215]: time="2025-05-16T00:45:12.360148363Z" level=info msg="StopContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" with timeout 30 (s)" May 16 00:45:12.361064 env[1215]: time="2025-05-16T00:45:12.360559452Z" level=info msg="Stop container \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" with signal terminated" May 16 00:45:12.375306 systemd[1]: cri-containerd-13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5.scope: Deactivated successfully. May 16 00:45:12.391352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5-rootfs.mount: Deactivated successfully. May 16 00:45:12.416052 env[1215]: time="2025-05-16T00:45:12.416006048Z" level=info msg="shim disconnected" id=13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5 May 16 00:45:12.416401 env[1215]: time="2025-05-16T00:45:12.416381980Z" level=warning msg="cleaning up after shim disconnected" id=13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5 namespace=k8s.io May 16 00:45:12.416483 env[1215]: time="2025-05-16T00:45:12.416469893Z" level=info msg="cleaning up dead shim" May 16 00:45:12.423985 env[1215]: time="2025-05-16T00:45:12.423942137Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3582 runtime=io.containerd.runc.v2\n" May 16 00:45:12.426128 env[1215]: time="2025-05-16T00:45:12.426092297Z" level=info msg="StopContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" returns successfully" May 16 00:45:12.428093 env[1215]: time="2025-05-16T00:45:12.427975837Z" level=info msg="StopPodSandbox for \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\"" May 16 00:45:12.428093 env[1215]: time="2025-05-16T00:45:12.428052832Z" level=info msg="Container to stop \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.429766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed-shm.mount: Deactivated successfully. 
May 16 00:45:12.430655 env[1215]: time="2025-05-16T00:45:12.430298745Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:45:12.433548 env[1215]: time="2025-05-16T00:45:12.433470189Z" level=info msg="StopContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" with timeout 2 (s)" May 16 00:45:12.433849 env[1215]: time="2025-05-16T00:45:12.433819283Z" level=info msg="Stop container \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" with signal terminated" May 16 00:45:12.438253 systemd[1]: cri-containerd-81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed.scope: Deactivated successfully. May 16 00:45:12.439343 systemd-networkd[1045]: lxc_health: Link DOWN May 16 00:45:12.439350 systemd-networkd[1045]: lxc_health: Lost carrier May 16 00:45:12.460269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed-rootfs.mount: Deactivated successfully. May 16 00:45:12.471418 systemd[1]: cri-containerd-c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45.scope: Deactivated successfully. May 16 00:45:12.471727 systemd[1]: cri-containerd-c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45.scope: Consumed 6.608s CPU time. May 16 00:45:12.472558 env[1215]: time="2025-05-16T00:45:12.472519804Z" level=info msg="shim disconnected" id=81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed May 16 00:45:12.472911 env[1215]: time="2025-05-16T00:45:12.472890296Z" level=warning msg="cleaning up after shim disconnected" id=81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed namespace=k8s.io May 16 00:45:12.473439 env[1215]: time="2025-05-16T00:45:12.473416577Z" level=info msg="cleaning up dead shim" May 16 00:45:12.482137 env[1215]: time="2025-05-16T00:45:12.482107571Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3626 runtime=io.containerd.runc.v2\n" May 16 00:45:12.482537 env[1215]: time="2025-05-16T00:45:12.482509181Z" level=info msg="TearDown network for sandbox \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\" successfully" May 16 00:45:12.482633 env[1215]: time="2025-05-16T00:45:12.482614933Z" level=info msg="StopPodSandbox for \"81acb6a3f11e72f999b94454693dfc8aa2d8a9415d027af1748d0a6df47569ed\" returns successfully" May 16 00:45:12.491051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45-rootfs.mount: Deactivated successfully. 
May 16 00:45:12.497856 env[1215]: time="2025-05-16T00:45:12.497792124Z" level=info msg="shim disconnected" id=c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45 May 16 00:45:12.498032 env[1215]: time="2025-05-16T00:45:12.498012548Z" level=warning msg="cleaning up after shim disconnected" id=c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45 namespace=k8s.io May 16 00:45:12.498099 env[1215]: time="2025-05-16T00:45:12.498087102Z" level=info msg="cleaning up dead shim" May 16 00:45:12.501697 kubelet[1918]: I0516 00:45:12.501655 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl62g\" (UniqueName: \"kubernetes.io/projected/a171ae39-e414-4f98-815b-5bfb1e604716-kube-api-access-dl62g\") pod \"a171ae39-e414-4f98-815b-5bfb1e604716\" (UID: \"a171ae39-e414-4f98-815b-5bfb1e604716\") " May 16 00:45:12.502023 kubelet[1918]: I0516 00:45:12.501715 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a171ae39-e414-4f98-815b-5bfb1e604716-cilium-config-path\") pod \"a171ae39-e414-4f98-815b-5bfb1e604716\" (UID: \"a171ae39-e414-4f98-815b-5bfb1e604716\") " May 16 00:45:12.510436 env[1215]: time="2025-05-16T00:45:12.510399466Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3648 runtime=io.containerd.runc.v2\n" May 16 00:45:12.513921 env[1215]: time="2025-05-16T00:45:12.513886447Z" level=info msg="StopContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" returns successfully" May 16 00:45:12.515339 env[1215]: time="2025-05-16T00:45:12.515298902Z" level=info msg="StopPodSandbox for \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\"" May 16 00:45:12.515412 env[1215]: time="2025-05-16T00:45:12.515387095Z" level=info msg="Container to stop \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.515412 env[1215]: time="2025-05-16T00:45:12.515405654Z" level=info msg="Container to stop \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.515463 env[1215]: time="2025-05-16T00:45:12.515420133Z" level=info msg="Container to stop \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.515463 env[1215]: time="2025-05-16T00:45:12.515432452Z" level=info msg="Container to stop \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.515463 env[1215]: time="2025-05-16T00:45:12.515443131Z" level=info msg="Container to stop \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:12.518286 kubelet[1918]: I0516 00:45:12.518227 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a171ae39-e414-4f98-815b-5bfb1e604716-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a171ae39-e414-4f98-815b-5bfb1e604716" (UID: "a171ae39-e414-4f98-815b-5bfb1e604716"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:45:12.519237 kubelet[1918]: I0516 00:45:12.519189 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a171ae39-e414-4f98-815b-5bfb1e604716-kube-api-access-dl62g" (OuterVolumeSpecName: "kube-api-access-dl62g") pod "a171ae39-e414-4f98-815b-5bfb1e604716" (UID: "a171ae39-e414-4f98-815b-5bfb1e604716"). InnerVolumeSpecName "kube-api-access-dl62g". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:45:12.521500 systemd[1]: cri-containerd-107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e.scope: Deactivated successfully. May 16 00:45:12.540323 env[1215]: time="2025-05-16T00:45:12.540277644Z" level=info msg="shim disconnected" id=107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e May 16 00:45:12.540878 env[1215]: time="2025-05-16T00:45:12.540852601Z" level=warning msg="cleaning up after shim disconnected" id=107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e namespace=k8s.io May 16 00:45:12.540962 env[1215]: time="2025-05-16T00:45:12.540946154Z" level=info msg="cleaning up dead shim" May 16 00:45:12.547542 env[1215]: time="2025-05-16T00:45:12.547508026Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3680 runtime=io.containerd.runc.v2\n" May 16 00:45:12.548057 env[1215]: time="2025-05-16T00:45:12.548027347Z" level=info msg="TearDown network for sandbox \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" successfully" May 16 00:45:12.548156 env[1215]: time="2025-05-16T00:45:12.548137459Z" level=info msg="StopPodSandbox for \"107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e\" returns successfully" May 16 00:45:12.602733 kubelet[1918]: I0516 00:45:12.602680 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-cgroup\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602733 kubelet[1918]: I0516 00:45:12.602723 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-hostproc\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602951 kubelet[1918]: I0516 00:45:12.602756 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-etc-cni-netd\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602951 kubelet[1918]: I0516 00:45:12.602774 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-kernel\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602951 kubelet[1918]: I0516 00:45:12.602792 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cni-path\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 
00:45:12.602951 kubelet[1918]: I0516 00:45:12.602848 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8j9w\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-kube-api-access-t8j9w\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602951 kubelet[1918]: I0516 00:45:12.602863 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-bpf-maps\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.602951 kubelet[1918]: I0516 00:45:12.602878 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-lib-modules\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.602905 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92aacb67-c782-4d58-a9f3-472898597620-clustermesh-secrets\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.602925 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-hubble-tls\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.602941 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-net\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.602961 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92aacb67-c782-4d58-a9f3-472898597620-cilium-config-path\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.602984 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-xtables-lock\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603090 kubelet[1918]: I0516 00:45:12.603002 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-run\") pod \"92aacb67-c782-4d58-a9f3-472898597620\" (UID: \"92aacb67-c782-4d58-a9f3-472898597620\") " May 16 00:45:12.603245 kubelet[1918]: I0516 00:45:12.603036 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a171ae39-e414-4f98-815b-5bfb1e604716-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.603245 kubelet[1918]: I0516 00:45:12.603046 1918 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-dl62g\" (UniqueName: \"kubernetes.io/projected/a171ae39-e414-4f98-815b-5bfb1e604716-kube-api-access-dl62g\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.603245 kubelet[1918]: I0516 00:45:12.603090 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603245 kubelet[1918]: I0516 00:45:12.603135 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603245 kubelet[1918]: I0516 00:45:12.603150 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-hostproc" (OuterVolumeSpecName: "hostproc") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603356 kubelet[1918]: I0516 00:45:12.603164 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603356 kubelet[1918]: I0516 00:45:12.603178 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603356 kubelet[1918]: I0516 00:45:12.603192 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cni-path" (OuterVolumeSpecName: "cni-path") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603709 kubelet[1918]: I0516 00:45:12.603541 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.603709 kubelet[1918]: I0516 00:45:12.603585 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.605447 kubelet[1918]: I0516 00:45:12.605402 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92aacb67-c782-4d58-a9f3-472898597620-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:45:12.605447 kubelet[1918]: I0516 00:45:12.605447 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.605562 kubelet[1918]: I0516 00:45:12.605466 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:12.606146 kubelet[1918]: I0516 00:45:12.606120 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:45:12.606255 kubelet[1918]: I0516 00:45:12.606182 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-kube-api-access-t8j9w" (OuterVolumeSpecName: "kube-api-access-t8j9w") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "kube-api-access-t8j9w". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:45:12.606595 kubelet[1918]: I0516 00:45:12.606549 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92aacb67-c782-4d58-a9f3-472898597620-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "92aacb67-c782-4d58-a9f3-472898597620" (UID: "92aacb67-c782-4d58-a9f3-472898597620"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:45:12.704186 kubelet[1918]: I0516 00:45:12.704152 1918 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704340 kubelet[1918]: I0516 00:45:12.704325 1918 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704405 kubelet[1918]: I0516 00:45:12.704395 1918 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704464 kubelet[1918]: I0516 00:45:12.704454 1918 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704523 kubelet[1918]: I0516 00:45:12.704513 1918 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92aacb67-c782-4d58-a9f3-472898597620-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704581 kubelet[1918]: I0516 00:45:12.704571 1918 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8j9w\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-kube-api-access-t8j9w\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704641 kubelet[1918]: I0516 00:45:12.704631 1918 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704701 kubelet[1918]: I0516 00:45:12.704689 1918 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92aacb67-c782-4d58-a9f3-472898597620-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704770 kubelet[1918]: I0516 00:45:12.704759 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92aacb67-c782-4d58-a9f3-472898597620-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704860 kubelet[1918]: I0516 00:45:12.704849 1918 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704924 kubelet[1918]: I0516 00:45:12.704913 1918 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.704982 kubelet[1918]: I0516 00:45:12.704974 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:45:12.705041 kubelet[1918]: I0516 00:45:12.705032 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-cilium-cgroup\") on node \"localhost\" 
DevicePath \"\"" May 16 00:45:12.705102 kubelet[1918]: I0516 00:45:12.705092 1918 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92aacb67-c782-4d58-a9f3-472898597620-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:45:13.053418 kubelet[1918]: E0516 00:45:13.053310 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:13.063103 systemd[1]: Removed slice kubepods-besteffort-poda171ae39_e414_4f98_815b_5bfb1e604716.slice. May 16 00:45:13.064029 systemd[1]: Removed slice kubepods-burstable-pod92aacb67_c782_4d58_a9f3_472898597620.slice. May 16 00:45:13.064105 systemd[1]: kubepods-burstable-pod92aacb67_c782_4d58_a9f3_472898597620.slice: Consumed 6.820s CPU time. May 16 00:45:13.254892 kubelet[1918]: I0516 00:45:13.254833 1918 scope.go:117] "RemoveContainer" containerID="c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45" May 16 00:45:13.257700 env[1215]: time="2025-05-16T00:45:13.257424750Z" level=info msg="RemoveContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\"" May 16 00:45:13.262388 env[1215]: time="2025-05-16T00:45:13.262354928Z" level=info msg="RemoveContainer for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" returns successfully" May 16 00:45:13.262761 kubelet[1918]: I0516 00:45:13.262738 1918 scope.go:117] "RemoveContainer" containerID="2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6" May 16 00:45:13.263848 env[1215]: time="2025-05-16T00:45:13.263809668Z" level=info msg="RemoveContainer for \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\"" May 16 00:45:13.266730 env[1215]: time="2025-05-16T00:45:13.266624512Z" level=info msg="RemoveContainer for \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\" returns successfully" May 16 00:45:13.267027 kubelet[1918]: I0516 00:45:13.267005 1918 scope.go:117] "RemoveContainer" containerID="5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e" May 16 00:45:13.271116 env[1215]: time="2025-05-16T00:45:13.271075284Z" level=info msg="RemoveContainer for \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\"" May 16 00:45:13.273828 env[1215]: time="2025-05-16T00:45:13.273712221Z" level=info msg="RemoveContainer for \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\" returns successfully" May 16 00:45:13.273962 kubelet[1918]: I0516 00:45:13.273922 1918 scope.go:117] "RemoveContainer" containerID="1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3" May 16 00:45:13.275131 env[1215]: time="2025-05-16T00:45:13.275108485Z" level=info msg="RemoveContainer for \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\"" May 16 00:45:13.278348 env[1215]: time="2025-05-16T00:45:13.278310503Z" level=info msg="RemoveContainer for \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\" returns successfully" May 16 00:45:13.278538 kubelet[1918]: I0516 00:45:13.278508 1918 scope.go:117] "RemoveContainer" containerID="4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98" May 16 00:45:13.279520 env[1215]: time="2025-05-16T00:45:13.279481342Z" level=info msg="RemoveContainer for \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\"" May 16 00:45:13.282239 env[1215]: time="2025-05-16T00:45:13.282200353Z" level=info msg="RemoveContainer for 
\"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\" returns successfully" May 16 00:45:13.282405 kubelet[1918]: I0516 00:45:13.282375 1918 scope.go:117] "RemoveContainer" containerID="c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45" May 16 00:45:13.282633 env[1215]: time="2025-05-16T00:45:13.282564728Z" level=error msg="ContainerStatus for \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\": not found" May 16 00:45:13.282766 kubelet[1918]: E0516 00:45:13.282730 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\": not found" containerID="c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45" May 16 00:45:13.283964 kubelet[1918]: I0516 00:45:13.283842 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45"} err="failed to get container status \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\": rpc error: code = NotFound desc = an error occurred when try to find container \"c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45\": not found" May 16 00:45:13.284004 kubelet[1918]: I0516 00:45:13.283967 1918 scope.go:117] "RemoveContainer" containerID="2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6" May 16 00:45:13.284250 env[1215]: time="2025-05-16T00:45:13.284187375Z" level=error msg="ContainerStatus for \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\": not found" May 16 00:45:13.284429 kubelet[1918]: E0516 00:45:13.284404 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\": not found" containerID="2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6" May 16 00:45:13.284457 kubelet[1918]: I0516 00:45:13.284431 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6"} err="failed to get container status \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a3c64bb808e319003d21d73bdd258db41eb1bbd6763536bb40b06e21bef5ed6\": not found" May 16 00:45:13.284484 kubelet[1918]: I0516 00:45:13.284458 1918 scope.go:117] "RemoveContainer" containerID="5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e" May 16 00:45:13.284657 env[1215]: time="2025-05-16T00:45:13.284610146Z" level=error msg="ContainerStatus for \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\": not found" May 16 00:45:13.284814 kubelet[1918]: E0516 00:45:13.284781 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\": not found" containerID="5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e" May 16 00:45:13.284845 kubelet[1918]: I0516 00:45:13.284822 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e"} err="failed to get container status \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bff5ba6c396475f1344eeea564ce8d331daaa4093efe4a9a11cd28387bb007e\": not found" May 16 00:45:13.284845 kubelet[1918]: I0516 00:45:13.284839 1918 scope.go:117] "RemoveContainer" containerID="1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3" May 16 00:45:13.285065 env[1215]: time="2025-05-16T00:45:13.285018518Z" level=error msg="ContainerStatus for \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\": not found" May 16 00:45:13.285180 kubelet[1918]: E0516 00:45:13.285159 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\": not found" containerID="1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3" May 16 00:45:13.285214 kubelet[1918]: I0516 00:45:13.285200 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3"} err="failed to get container status \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f6b600a01e0fe3c794b0625b90fdbd0cde9d50f127aff17da2d8917b9fc3fa3\": not found" May 16 00:45:13.285242 kubelet[1918]: I0516 00:45:13.285217 1918 scope.go:117] "RemoveContainer" containerID="4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98" May 16 00:45:13.285401 env[1215]: time="2025-05-16T00:45:13.285359334Z" level=error msg="ContainerStatus for \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\": not found" May 16 00:45:13.285502 kubelet[1918]: E0516 00:45:13.285483 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\": not found" containerID="4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98" May 16 00:45:13.285538 kubelet[1918]: I0516 00:45:13.285506 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98"} err="failed to get container status \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f6e19e949c75c8fbe8be972c08dd8afa790ef05a53082186ee5cac22ed12c98\": not found" May 16 00:45:13.285538 kubelet[1918]: I0516 
00:45:13.285531 1918 scope.go:117] "RemoveContainer" containerID="13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5" May 16 00:45:13.286559 env[1215]: time="2025-05-16T00:45:13.286532773Z" level=info msg="RemoveContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\"" May 16 00:45:13.288563 env[1215]: time="2025-05-16T00:45:13.288527315Z" level=info msg="RemoveContainer for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" returns successfully" May 16 00:45:13.288725 kubelet[1918]: I0516 00:45:13.288679 1918 scope.go:117] "RemoveContainer" containerID="13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5" May 16 00:45:13.288954 env[1215]: time="2025-05-16T00:45:13.288894489Z" level=error msg="ContainerStatus for \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\": not found" May 16 00:45:13.289098 kubelet[1918]: E0516 00:45:13.289058 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\": not found" containerID="13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5" May 16 00:45:13.289174 kubelet[1918]: I0516 00:45:13.289101 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5"} err="failed to get container status \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\": rpc error: code = NotFound desc = an error occurred when try to find container \"13f23ddec814fd2791ecb8222132118297f05a7e5cbccfde8a57150d97160ab5\": not found" May 16 00:45:13.368788 systemd[1]: var-lib-kubelet-pods-a171ae39\x2de414\x2d4f98\x2d815b\x2d5bfb1e604716-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddl62g.mount: Deactivated successfully. May 16 00:45:13.368906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e-rootfs.mount: Deactivated successfully. May 16 00:45:13.368968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-107576770b758cbcc579162e5e07d5e09ea6b3914edc7b80eff664d22b268a9e-shm.mount: Deactivated successfully. May 16 00:45:13.369019 systemd[1]: var-lib-kubelet-pods-92aacb67\x2dc782\x2d4d58\x2da9f3\x2d472898597620-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt8j9w.mount: Deactivated successfully. May 16 00:45:13.369068 systemd[1]: var-lib-kubelet-pods-92aacb67\x2dc782\x2d4d58\x2da9f3\x2d472898597620-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:45:13.369117 systemd[1]: var-lib-kubelet-pods-92aacb67\x2dc782\x2d4d58\x2da9f3\x2d472898597620-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:45:14.312135 sshd[3535]: pam_unix(sshd:session): session closed for user core May 16 00:45:14.315734 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:34610.service. May 16 00:45:14.316245 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:45832.service: Deactivated successfully. May 16 00:45:14.316969 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:45:14.317124 systemd[1]: session-22.scope: Consumed 1.778s CPU time. 
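The RemoveContainer / ContainerStatus exchange above ends in NotFound for every ID because the containers were already deleted; the kubelet treats that gRPC code as "already gone" rather than as a failure. Below is a minimal sketch of that check against the CRI runtime service, assuming the containerd socket at /run/containerd/containerd.sock and the k8s.io/cri-api v1 client; the helper name isGone is illustrative, not a kubelet function.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // isGone reports whether the runtime no longer knows the container; a NotFound
    // status mirrors the "not found" errors logged above and is not treated as fatal.
    func isGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
        _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if err == nil {
            return false, nil
        }
        if status.Code(err) == codes.NotFound {
            return true, nil
        }
        return false, err
    }

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        gone, err := isGone(ctx, runtimeapi.NewRuntimeServiceClient(conn), "c12394a4f4f9e846ccb94722788ce488f09875b020fadf3a531c6aa0b28f7d45")
        fmt.Println(gone, err)
    }
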
May 16 00:45:14.317677 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. May 16 00:45:14.318487 systemd-logind[1203]: Removed session 22. May 16 00:45:14.360421 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:14.361553 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:14.364964 systemd-logind[1203]: New session 23 of user core. May 16 00:45:14.365792 systemd[1]: Started session-23.scope. May 16 00:45:15.055647 kubelet[1918]: I0516 00:45:15.055606 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92aacb67-c782-4d58-a9f3-472898597620" path="/var/lib/kubelet/pods/92aacb67-c782-4d58-a9f3-472898597620/volumes" May 16 00:45:15.056372 kubelet[1918]: I0516 00:45:15.056351 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a171ae39-e414-4f98-815b-5bfb1e604716" path="/var/lib/kubelet/pods/a171ae39-e414-4f98-815b-5bfb1e604716/volumes" May 16 00:45:15.068378 sshd[3697]: pam_unix(sshd:session): session closed for user core May 16 00:45:15.072108 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:34612.service. May 16 00:45:15.074419 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:34610.service: Deactivated successfully. May 16 00:45:15.075130 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:45:15.075617 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. May 16 00:45:15.082954 systemd-logind[1203]: Removed session 23. May 16 00:45:15.101817 kubelet[1918]: I0516 00:45:15.097091 1918 memory_manager.go:355] "RemoveStaleState removing state" podUID="92aacb67-c782-4d58-a9f3-472898597620" containerName="cilium-agent" May 16 00:45:15.101817 kubelet[1918]: I0516 00:45:15.097123 1918 memory_manager.go:355] "RemoveStaleState removing state" podUID="a171ae39-e414-4f98-815b-5bfb1e604716" containerName="cilium-operator" May 16 00:45:15.112851 systemd[1]: Created slice kubepods-burstable-poda00181aa_fd42_472f_ad63_4b914f03ca29.slice. 
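The slice created just above encodes the pod's QoS class and UID: '-' is the hierarchy separator in systemd unit names, so the dashes in the UID are swapped for underscores. A small sketch of that naming, assuming the burstable QoS class seen in the log; podSliceName is an illustrative helper, not a kubelet symbol.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the systemd slice name for a pod cgroup: QoS class plus
    // the pod UID with '-' replaced by '_', since '-' separates slice levels.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Matches "Created slice kubepods-burstable-poda00181aa_fd42_472f_ad63_4b914f03ca29.slice" above.
        fmt.Println(podSliceName("burstable", "a00181aa-fd42-472f-ad63-4b914f03ca29"))
    }
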
May 16 00:45:15.116649 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 34612 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:15.118313 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118652 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cni-path\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118688 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-clustermesh-secrets\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118709 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-config-path\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118728 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-ipsec-secrets\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118744 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-etc-cni-netd\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119000 kubelet[1918]: I0516 00:45:15.118759 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-lib-modules\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118774 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-bpf-maps\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118788 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-hostproc\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118816 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgfr\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-kube-api-access-ghgfr\") pod \"cilium-zl5w8\" (UID: 
\"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118835 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-run\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118851 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-cgroup\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119185 kubelet[1918]: I0516 00:45:15.118867 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-net\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119312 kubelet[1918]: I0516 00:45:15.118882 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-kernel\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119312 kubelet[1918]: I0516 00:45:15.118896 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-hubble-tls\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.119312 kubelet[1918]: I0516 00:45:15.118913 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-xtables-lock\") pod \"cilium-zl5w8\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " pod="kube-system/cilium-zl5w8" May 16 00:45:15.123422 systemd-logind[1203]: New session 24 of user core. May 16 00:45:15.124065 kubelet[1918]: E0516 00:45:15.123944 1918 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:45:15.124193 systemd[1]: Started session-24.scope. May 16 00:45:15.251218 sshd[3709]: pam_unix(sshd:session): session closed for user core May 16 00:45:15.255160 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:34614.service. May 16 00:45:15.255670 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:34612.service: Deactivated successfully. May 16 00:45:15.257056 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:45:15.257636 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit. May 16 00:45:15.259143 systemd-logind[1203]: Removed session 24. 
May 16 00:45:15.264457 kubelet[1918]: E0516 00:45:15.264426 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:15.265480 env[1215]: time="2025-05-16T00:45:15.265161386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl5w8,Uid:a00181aa-fd42-472f-ad63-4b914f03ca29,Namespace:kube-system,Attempt:0,}" May 16 00:45:15.281199 env[1215]: time="2025-05-16T00:45:15.281135754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:45:15.281319 env[1215]: time="2025-05-16T00:45:15.281187311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:45:15.281397 env[1215]: time="2025-05-16T00:45:15.281372180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:45:15.281621 env[1215]: time="2025-05-16T00:45:15.281584727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8 pid=3740 runtime=io.containerd.runc.v2 May 16 00:45:15.291313 systemd[1]: Started cri-containerd-a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8.scope. May 16 00:45:15.301178 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 34614 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:45:15.302567 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:45:15.306585 systemd[1]: Started session-25.scope. May 16 00:45:15.307221 systemd-logind[1203]: New session 25 of user core. May 16 00:45:15.321233 env[1215]: time="2025-05-16T00:45:15.321191367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl5w8,Uid:a00181aa-fd42-472f-ad63-4b914f03ca29,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\"" May 16 00:45:15.321929 kubelet[1918]: E0516 00:45:15.321908 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:15.326096 env[1215]: time="2025-05-16T00:45:15.325706778Z" level=info msg="CreateContainer within sandbox \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:45:15.334883 env[1215]: time="2025-05-16T00:45:15.334832514Z" level=info msg="CreateContainer within sandbox \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\"" May 16 00:45:15.335302 env[1215]: time="2025-05-16T00:45:15.335272608Z" level=info msg="StartContainer for \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\"" May 16 00:45:15.349203 systemd[1]: Started cri-containerd-372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f.scope. May 16 00:45:15.365884 systemd[1]: cri-containerd-372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f.scope: Deactivated successfully. 
May 16 00:45:15.380433 env[1215]: time="2025-05-16T00:45:15.380364921Z" level=info msg="shim disconnected" id=372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f May 16 00:45:15.380433 env[1215]: time="2025-05-16T00:45:15.380416958Z" level=warning msg="cleaning up after shim disconnected" id=372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f namespace=k8s.io May 16 00:45:15.380433 env[1215]: time="2025-05-16T00:45:15.380428477Z" level=info msg="cleaning up dead shim" May 16 00:45:15.387784 env[1215]: time="2025-05-16T00:45:15.387731922Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3804 runtime=io.containerd.runc.v2\ntime=\"2025-05-16T00:45:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 16 00:45:15.388097 env[1215]: time="2025-05-16T00:45:15.387993867Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" May 16 00:45:15.388328 env[1215]: time="2025-05-16T00:45:15.388252411Z" level=error msg="Failed to pipe stdout of container \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\"" error="reading from a closed fifo" May 16 00:45:15.388471 env[1215]: time="2025-05-16T00:45:15.388435040Z" level=error msg="Failed to pipe stderr of container \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\"" error="reading from a closed fifo" May 16 00:45:15.390508 env[1215]: time="2025-05-16T00:45:15.390462159Z" level=error msg="StartContainer for \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 16 00:45:15.390877 kubelet[1918]: E0516 00:45:15.390825 1918 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f" May 16 00:45:15.391885 kubelet[1918]: E0516 00:45:15.391205 1918 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 16 00:45:15.391885 kubelet[1918]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 16 00:45:15.391885 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 16 00:45:15.391885 kubelet[1918]: rm /hostbin/cilium-mount May 16 00:45:15.392051 kubelet[1918]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghgfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zl5w8_kube-system(a00181aa-fd42-472f-ad63-4b914f03ca29): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 16 00:45:15.392051 kubelet[1918]: > logger="UnhandledError" May 16 00:45:15.392964 kubelet[1918]: E0516 00:45:15.392290 1918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zl5w8" podUID="a00181aa-fd42-472f-ad63-4b914f03ca29" May 16 00:45:16.275538 env[1215]: time="2025-05-16T00:45:16.275458264Z" level=info msg="StopPodSandbox for \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\"" May 16 00:45:16.275883 env[1215]: time="2025-05-16T00:45:16.275521740Z" level=info msg="Container to stop \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:45:16.277347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8-shm.mount: Deactivated successfully. May 16 00:45:16.283314 systemd[1]: cri-containerd-a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8.scope: Deactivated successfully. May 16 00:45:16.303775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8-rootfs.mount: Deactivated successfully. 
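The StartContainer failure above is raised before the cilium mount-cgroup script ever runs: because the container spec carries an SELinux label (Type:spc_t), container init writes the key-creation label to /proc/self/attr/keycreate, and on this host the kernel rejects that write with EINVAL, typically a sign that SELinux is not fully enabled or the policy does not know the type. A minimal sketch that reproduces just that write; the label value here is illustrative.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // Same write that container init performs when an SELinux label is requested.
        label := []byte("system_u:system_r:spc_t:s0")
        err := os.WriteFile("/proc/self/attr/keycreate", label, 0o600)
        if errors.Is(err, syscall.EINVAL) {
            // Matches "write /proc/self/attr/keycreate: invalid argument" in the log.
            fmt.Println("keycreate label rejected: invalid argument")
            return
        }
        fmt.Println("keycreate write result:", err)
    }
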
May 16 00:45:16.308097 env[1215]: time="2025-05-16T00:45:16.307948438Z" level=info msg="shim disconnected" id=a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8 May 16 00:45:16.308097 env[1215]: time="2025-05-16T00:45:16.307993556Z" level=warning msg="cleaning up after shim disconnected" id=a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8 namespace=k8s.io May 16 00:45:16.308097 env[1215]: time="2025-05-16T00:45:16.308002715Z" level=info msg="cleaning up dead shim" May 16 00:45:16.314615 env[1215]: time="2025-05-16T00:45:16.314567874Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n" May 16 00:45:16.314875 env[1215]: time="2025-05-16T00:45:16.314851019Z" level=info msg="TearDown network for sandbox \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\" successfully" May 16 00:45:16.314916 env[1215]: time="2025-05-16T00:45:16.314875657Z" level=info msg="StopPodSandbox for \"a1ebaff556d52e45eb43431fd88cf4a7627a4b962bd69a693db10831fa4a77a8\" returns successfully" May 16 00:45:16.427463 kubelet[1918]: I0516 00:45:16.427425 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-cgroup\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427463 kubelet[1918]: I0516 00:45:16.427474 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-clustermesh-secrets\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427501 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-ipsec-secrets\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427523 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgfr\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-kube-api-access-ghgfr\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427539 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-net\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427558 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-hubble-tls\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427573 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-bpf-maps\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" 
(UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427600 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-etc-cni-netd\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427615 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-hostproc\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427629 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-xtables-lock\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427644 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cni-path\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427660 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-config-path\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427677 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-lib-modules\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427691 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-run\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427716 1918 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-kernel\") pod \"a00181aa-fd42-472f-ad63-4b914f03ca29\" (UID: \"a00181aa-fd42-472f-ad63-4b914f03ca29\") " May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427781 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.427900 kubelet[1918]: I0516 00:45:16.427826 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.428252 kubelet[1918]: I0516 00:45:16.428150 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.428373 kubelet[1918]: I0516 00:45:16.428353 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428705 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-hostproc" (OuterVolumeSpecName: "hostproc") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428721 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428755 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428776 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428790 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cni-path" (OuterVolumeSpecName: "cni-path") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.429506 kubelet[1918]: I0516 00:45:16.428791 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:45:16.430907 kubelet[1918]: I0516 00:45:16.430847 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:45:16.431415 kubelet[1918]: I0516 00:45:16.431394 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:45:16.431604 kubelet[1918]: I0516 00:45:16.431567 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:45:16.431960 systemd[1]: var-lib-kubelet-pods-a00181aa\x2dfd42\x2d472f\x2dad63\x2d4b914f03ca29-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:45:16.432065 systemd[1]: var-lib-kubelet-pods-a00181aa\x2dfd42\x2d472f\x2dad63\x2d4b914f03ca29-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 16 00:45:16.432955 kubelet[1918]: I0516 00:45:16.432926 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:45:16.433156 kubelet[1918]: I0516 00:45:16.433135 1918 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-kube-api-access-ghgfr" (OuterVolumeSpecName: "kube-api-access-ghgfr") pod "a00181aa-fd42-472f-ad63-4b914f03ca29" (UID: "a00181aa-fd42-472f-ad63-4b914f03ca29"). InnerVolumeSpecName "kube-api-access-ghgfr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:45:16.434096 systemd[1]: var-lib-kubelet-pods-a00181aa\x2dfd42\x2d472f\x2dad63\x2d4b914f03ca29-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghgfr.mount: Deactivated successfully. May 16 00:45:16.434182 systemd[1]: var-lib-kubelet-pods-a00181aa\x2dfd42\x2d472f\x2dad63\x2d4b914f03ca29-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 16 00:45:16.528568 kubelet[1918]: I0516 00:45:16.527850 1918 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.528568 kubelet[1918]: I0516 00:45:16.528562 1918 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.528568 kubelet[1918]: I0516 00:45:16.528613 1918 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528756 1918 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528772 1918 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528781 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528830 1918 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528842 1918 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528850 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528858 1918 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528895 1918 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528908 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528917 1918 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghgfr\" (UniqueName: \"kubernetes.io/projected/a00181aa-fd42-472f-ad63-4b914f03ca29-kube-api-access-ghgfr\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 
00:45:16.528925 1918 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:45:16.529205 kubelet[1918]: I0516 00:45:16.528934 1918 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a00181aa-fd42-472f-ad63-4b914f03ca29-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:45:17.059337 systemd[1]: Removed slice kubepods-burstable-poda00181aa_fd42_472f_ad63_4b914f03ca29.slice. May 16 00:45:17.278725 kubelet[1918]: I0516 00:45:17.278674 1918 scope.go:117] "RemoveContainer" containerID="372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f" May 16 00:45:17.280291 env[1215]: time="2025-05-16T00:45:17.280251004Z" level=info msg="RemoveContainer for \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\"" May 16 00:45:17.305123 env[1215]: time="2025-05-16T00:45:17.305067031Z" level=info msg="RemoveContainer for \"372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f\" returns successfully" May 16 00:45:17.328708 kubelet[1918]: I0516 00:45:17.328601 1918 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:45:17Z","lastTransitionTime":"2025-05-16T00:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:45:17.330013 kubelet[1918]: I0516 00:45:17.329985 1918 memory_manager.go:355] "RemoveStaleState removing state" podUID="a00181aa-fd42-472f-ad63-4b914f03ca29" containerName="mount-cgroup" May 16 00:45:17.337390 systemd[1]: Created slice kubepods-burstable-pod678f1dbc_25d9_41f8_bcb3_368c46d1fddd.slice. 
May 16 00:45:17.434509 kubelet[1918]: I0516 00:45:17.434465 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-cilium-config-path\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434509 kubelet[1918]: I0516 00:45:17.434504 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-host-proc-sys-kernel\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434562 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-hubble-tls\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434585 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-cni-path\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434603 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-cilium-ipsec-secrets\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434619 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-etc-cni-netd\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434642 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-clustermesh-secrets\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434657 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-host-proc-sys-net\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434678 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fhvm\" (UniqueName: \"kubernetes.io/projected/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-kube-api-access-6fhvm\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434716 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-cilium-run\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434764 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-bpf-maps\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434814 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-hostproc\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434833 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-lib-modules\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434848 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-xtables-lock\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.434892 kubelet[1918]: I0516 00:45:17.434871 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/678f1dbc-25d9-41f8-bcb3-368c46d1fddd-cilium-cgroup\") pod \"cilium-8vl7p\" (UID: \"678f1dbc-25d9-41f8-bcb3-368c46d1fddd\") " pod="kube-system/cilium-8vl7p" May 16 00:45:17.639857 kubelet[1918]: E0516 00:45:17.639734 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:17.640772 env[1215]: time="2025-05-16T00:45:17.640728888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vl7p,Uid:678f1dbc-25d9-41f8-bcb3-368c46d1fddd,Namespace:kube-system,Attempt:0,}" May 16 00:45:17.652477 env[1215]: time="2025-05-16T00:45:17.652395019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:45:17.652477 env[1215]: time="2025-05-16T00:45:17.652441097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:45:17.652664 env[1215]: time="2025-05-16T00:45:17.652623888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:45:17.652912 env[1215]: time="2025-05-16T00:45:17.652871355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d pid=3868 runtime=io.containerd.runc.v2 May 16 00:45:17.664876 systemd[1]: Started cri-containerd-7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d.scope. 
May 16 00:45:17.690342 env[1215]: time="2025-05-16T00:45:17.690294186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8vl7p,Uid:678f1dbc-25d9-41f8-bcb3-368c46d1fddd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\"" May 16 00:45:17.691017 kubelet[1918]: E0516 00:45:17.690993 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:17.694671 env[1215]: time="2025-05-16T00:45:17.694636367Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:45:17.704912 env[1215]: time="2025-05-16T00:45:17.704858771Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614\"" May 16 00:45:17.705574 env[1215]: time="2025-05-16T00:45:17.705443462Z" level=info msg="StartContainer for \"d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614\"" May 16 00:45:17.720833 systemd[1]: Started cri-containerd-d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614.scope. May 16 00:45:17.751583 env[1215]: time="2025-05-16T00:45:17.751541695Z" level=info msg="StartContainer for \"d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614\" returns successfully" May 16 00:45:17.759552 systemd[1]: cri-containerd-d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614.scope: Deactivated successfully. 
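The recurring "Nameserver limits exceeded" error above comes from the kubelet's resolv.conf handling: Kubernetes applies at most three nameservers per pod, so extra entries from the host resolv.conf are dropped and only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) are used. A simplified sketch of that truncation; real parsing also handles search and options lines.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // firstThreeNameservers collects nameserver entries from a resolv.conf-style file
    // and keeps only the first three, the same cap that triggers the kubelet warning.
    func firstThreeNameservers(path string) (applied []string, truncated bool, err error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, false, err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                applied = append(applied, fields[1])
            }
        }
        if len(applied) > 3 {
            applied, truncated = applied[:3], true
        }
        return applied, truncated, sc.Err()
    }

    func main() {
        ns, truncated, err := firstThreeNameservers("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        fmt.Println("applied:", ns, "some omitted:", truncated)
    }
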
May 16 00:45:17.781681 env[1215]: time="2025-05-16T00:45:17.781638016Z" level=info msg="shim disconnected" id=d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614 May 16 00:45:17.781681 env[1215]: time="2025-05-16T00:45:17.781681414Z" level=warning msg="cleaning up after shim disconnected" id=d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614 namespace=k8s.io May 16 00:45:17.781902 env[1215]: time="2025-05-16T00:45:17.781691533Z" level=info msg="cleaning up dead shim" May 16 00:45:17.788067 env[1215]: time="2025-05-16T00:45:17.788027013Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n" May 16 00:45:18.282917 kubelet[1918]: E0516 00:45:18.282866 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:18.284690 env[1215]: time="2025-05-16T00:45:18.284635798Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:45:18.303717 env[1215]: time="2025-05-16T00:45:18.303661120Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142\"" May 16 00:45:18.304310 env[1215]: time="2025-05-16T00:45:18.304282891Z" level=info msg="StartContainer for \"c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142\"" May 16 00:45:18.319891 systemd[1]: Started cri-containerd-c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142.scope. May 16 00:45:18.348995 env[1215]: time="2025-05-16T00:45:18.348950111Z" level=info msg="StartContainer for \"c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142\" returns successfully" May 16 00:45:18.357283 systemd[1]: cri-containerd-c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142.scope: Deactivated successfully. 
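Each init container above (mount-cgroup, apply-sysctl-overwrites, and the ones that follow) goes through the same CRI pair: CreateContainer inside the pod sandbox, then StartContainer, after which the short-lived task exits and its scope is deactivated. A stripped-down sketch of that pair against the containerd CRI socket, reusing the sandbox ID from the log; the real kubelet passes a far richer ContainerConfig (mounts, env, security context).

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // "CreateContainer within sandbox ... for &ContainerMetadata{Name:apply-sysctl-overwrites,...}"
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: "7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d",
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "apply-sysctl-overwrites"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
            },
            SandboxConfig: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{Name: "cilium-8vl7p", Namespace: "kube-system"},
            },
        })
        if err != nil {
            panic(err)
        }

        // "StartContainer for ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
            panic(err)
        }
    }
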
May 16 00:45:18.376893 env[1215]: time="2025-05-16T00:45:18.376850623Z" level=info msg="shim disconnected" id=c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142 May 16 00:45:18.377139 env[1215]: time="2025-05-16T00:45:18.377120531Z" level=warning msg="cleaning up after shim disconnected" id=c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142 namespace=k8s.io May 16 00:45:18.377224 env[1215]: time="2025-05-16T00:45:18.377210767Z" level=info msg="cleaning up dead shim" May 16 00:45:18.383943 env[1215]: time="2025-05-16T00:45:18.383906858Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\n" May 16 00:45:18.485598 kubelet[1918]: W0516 00:45:18.485559 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda00181aa_fd42_472f_ad63_4b914f03ca29.slice/cri-containerd-372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f.scope WatchSource:0}: container "372b9ee62e1b02fb4eafa56a7b79cabe139b115d47cafd99bb0f5b1d7957249f" in namespace "k8s.io": not found May 16 00:45:19.053472 kubelet[1918]: E0516 00:45:19.053350 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:19.055693 kubelet[1918]: I0516 00:45:19.055429 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00181aa-fd42-472f-ad63-4b914f03ca29" path="/var/lib/kubelet/pods/a00181aa-fd42-472f-ad63-4b914f03ca29/volumes" May 16 00:45:19.285556 kubelet[1918]: E0516 00:45:19.285507 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:19.287327 env[1215]: time="2025-05-16T00:45:19.287280207Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:45:19.297420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463621197.mount: Deactivated successfully. May 16 00:45:19.301367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238888699.mount: Deactivated successfully. May 16 00:45:19.304245 env[1215]: time="2025-05-16T00:45:19.304160179Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2\"" May 16 00:45:19.305067 env[1215]: time="2025-05-16T00:45:19.304738955Z" level=info msg="StartContainer for \"fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2\"" May 16 00:45:19.320559 systemd[1]: Started cri-containerd-fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2.scope. May 16 00:45:19.349723 env[1215]: time="2025-05-16T00:45:19.349679751Z" level=info msg="StartContainer for \"fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2\" returns successfully" May 16 00:45:19.353777 systemd[1]: cri-containerd-fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2.scope: Deactivated successfully. 
May 16 00:45:19.373180 env[1215]: time="2025-05-16T00:45:19.373131448Z" level=info msg="shim disconnected" id=fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2 May 16 00:45:19.373461 env[1215]: time="2025-05-16T00:45:19.373432955Z" level=warning msg="cleaning up after shim disconnected" id=fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2 namespace=k8s.io May 16 00:45:19.373550 env[1215]: time="2025-05-16T00:45:19.373534791Z" level=info msg="cleaning up dead shim" May 16 00:45:19.379861 env[1215]: time="2025-05-16T00:45:19.379824327Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4070 runtime=io.containerd.runc.v2\n" May 16 00:45:20.124933 kubelet[1918]: E0516 00:45:20.124896 1918 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:45:20.292057 kubelet[1918]: E0516 00:45:20.291834 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:20.296784 env[1215]: time="2025-05-16T00:45:20.296731882Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:45:20.307741 env[1215]: time="2025-05-16T00:45:20.307656509Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835\"" May 16 00:45:20.308814 env[1215]: time="2025-05-16T00:45:20.308764427Z" level=info msg="StartContainer for \"a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835\"" May 16 00:45:20.328973 systemd[1]: Started cri-containerd-a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835.scope. May 16 00:45:20.355473 env[1215]: time="2025-05-16T00:45:20.355431941Z" level=info msg="StartContainer for \"a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835\" returns successfully" May 16 00:45:20.355521 systemd[1]: cri-containerd-a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835.scope: Deactivated successfully. May 16 00:45:20.375004 env[1215]: time="2025-05-16T00:45:20.374895724Z" level=info msg="shim disconnected" id=a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835 May 16 00:45:20.375004 env[1215]: time="2025-05-16T00:45:20.374942442Z" level=warning msg="cleaning up after shim disconnected" id=a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835 namespace=k8s.io May 16 00:45:20.375004 env[1215]: time="2025-05-16T00:45:20.374951602Z" level=info msg="cleaning up dead shim" May 16 00:45:20.380958 env[1215]: time="2025-05-16T00:45:20.380917696Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" May 16 00:45:20.539649 systemd[1]: run-containerd-runc-k8s.io-a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835-runc.PfQ6fs.mount: Deactivated successfully. 
May 16 00:45:20.539753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835-rootfs.mount: Deactivated successfully.
May 16 00:45:21.294773 kubelet[1918]: E0516 00:45:21.294726 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:21.296840 env[1215]: time="2025-05-16T00:45:21.296785595Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:45:21.310109 env[1215]: time="2025-05-16T00:45:21.309878711Z" level=info msg="CreateContainer within sandbox \"7bbb4aecb163499f66b77883487849c271b9c001585ce19aecf85c803b3d5f3d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5\""
May 16 00:45:21.310188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853279271.mount: Deactivated successfully.
May 16 00:45:21.310548 env[1215]: time="2025-05-16T00:45:21.310521490Z" level=info msg="StartContainer for \"98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5\""
May 16 00:45:21.326700 systemd[1]: Started cri-containerd-98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5.scope.
May 16 00:45:21.360333 env[1215]: time="2025-05-16T00:45:21.358462264Z" level=info msg="StartContainer for \"98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5\" returns successfully"
May 16 00:45:21.539720 systemd[1]: run-containerd-runc-k8s.io-98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5-runc.jC9NYZ.mount: Deactivated successfully.
May 16 00:45:21.598512 kubelet[1918]: W0516 00:45:21.596336 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678f1dbc_25d9_41f8_bcb3_368c46d1fddd.slice/cri-containerd-d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614.scope WatchSource:0}: task d47405a38bc3f6630495f2a1a28a932b450d84052ae7832e0175caef76cfe614 not found: not found
May 16 00:45:21.610824 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 16 00:45:22.298837 kubelet[1918]: E0516 00:45:22.298784 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:23.640480 kubelet[1918]: E0516 00:45:23.640394 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:24.434137 systemd-networkd[1045]: lxc_health: Link UP
May 16 00:45:24.446820 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 16 00:45:24.447452 systemd-networkd[1045]: lxc_health: Gained carrier
May 16 00:45:24.703841 kubelet[1918]: W0516 00:45:24.703808 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678f1dbc_25d9_41f8_bcb3_368c46d1fddd.slice/cri-containerd-c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142.scope WatchSource:0}: task c4ccd1035326d65a9f94c35d619b89d978a44b9aea2a1952e83d041f0a007142 not found: not found
May 16 00:45:25.641969 kubelet[1918]: E0516 00:45:25.641938 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:25.658575 kubelet[1918]: I0516 00:45:25.658522 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8vl7p" podStartSLOduration=8.658504568 podStartE2EDuration="8.658504568s" podCreationTimestamp="2025-05-16 00:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:45:22.314125469 +0000 UTC m=+87.342964308" watchObservedRunningTime="2025-05-16 00:45:25.658504568 +0000 UTC m=+90.687343367"
May 16 00:45:25.726461 systemd[1]: run-containerd-runc-k8s.io-98caeb7c7d24fffb2b32645c12851103f2004bdc5f4414b48dd6cb4400e415b5-runc.SMk9oB.mount: Deactivated successfully.
May 16 00:45:26.305445 kubelet[1918]: E0516 00:45:26.305412 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:26.405959 systemd-networkd[1045]: lxc_health: Gained IPv6LL
May 16 00:45:27.053064 kubelet[1918]: E0516 00:45:27.053024 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:27.306903 kubelet[1918]: E0516 00:45:27.306768 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:27.811513 kubelet[1918]: W0516 00:45:27.811466 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678f1dbc_25d9_41f8_bcb3_368c46d1fddd.slice/cri-containerd-fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2.scope WatchSource:0}: task fe46a74da16c7dae9b22751c5590a5757070cf977ecf85182cfaf7e3e9421de2 not found: not found
May 16 00:45:30.042787 sshd[3730]: pam_unix(sshd:session): session closed for user core
May 16 00:45:30.045472 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:34614.service: Deactivated successfully.
May 16 00:45:30.046211 systemd[1]: session-25.scope: Deactivated successfully.
May 16 00:45:30.046758 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit.
May 16 00:45:30.047397 systemd-logind[1203]: Removed session 25.
May 16 00:45:30.917570 kubelet[1918]: W0516 00:45:30.917532 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678f1dbc_25d9_41f8_bcb3_368c46d1fddd.slice/cri-containerd-a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835.scope WatchSource:0}: task a0b8a62b12a0fe8388a681ad57b832c819323dcfbf7cd83036e9ff3c95d7b835 not found: not found