Aug 12 23:49:32.722132 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:49:32.722152 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 12 23:49:32.722160 kernel: efi: EFI v2.70 by EDK II
Aug 12 23:49:32.722165 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Aug 12 23:49:32.722171 kernel: random: crng init done
Aug 12 23:49:32.722176 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:49:32.722183 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Aug 12 23:49:32.722190 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:49:32.722195 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722201 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722207 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722212 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722217 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722223 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722231 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722237 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722243 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:49:32.722248 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:49:32.722254 kernel: NUMA: Failed to initialise from firmware
Aug 12 23:49:32.722260 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:49:32.722266 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Aug 12 23:49:32.722272 kernel: Zone ranges:
Aug 12 23:49:32.722278 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:49:32.722284 kernel: DMA32 empty
Aug 12 23:49:32.722322 kernel: Normal empty
Aug 12 23:49:32.722328 kernel: Movable zone start for each node
Aug 12 23:49:32.722334 kernel: Early memory node ranges
Aug 12 23:49:32.722340 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Aug 12 23:49:32.722346 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Aug 12 23:49:32.722352 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Aug 12 23:49:32.722358 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Aug 12 23:49:32.722363 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Aug 12 23:49:32.722369 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Aug 12 23:49:32.722375 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Aug 12 23:49:32.722381 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:49:32.722389 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:49:32.722395 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:49:32.722401 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:49:32.722406 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:49:32.722412 kernel: psci: Trusted OS migration not required
Aug 12 23:49:32.722432 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:49:32.722439 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:49:32.722461 kernel: ACPI: SRAT not present
Aug 12 23:49:32.722468 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 12 23:49:32.722474 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 12 23:49:32.722480 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:49:32.722486 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:49:32.722493 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:49:32.722499 kernel: CPU features: detected: Hardware dirty bit management
Aug 12 23:49:32.722505 kernel: CPU features: detected: Spectre-v4
Aug 12 23:49:32.722511 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:49:32.722519 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:49:32.722525 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:49:32.722531 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:49:32.722537 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:49:32.722544 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 12 23:49:32.722550 kernel: Policy zone: DMA
Aug 12 23:49:32.722557 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 12 23:49:32.722564 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:49:32.722570 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:49:32.722576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:49:32.722582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:49:32.722590 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved)
Aug 12 23:49:32.722597 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:49:32.722603 kernel: trace event string verifier disabled
Aug 12 23:49:32.722609 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:49:32.722616 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:49:32.722622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:49:32.722629 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:49:32.722635 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:49:32.722641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:49:32.722648 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:49:32.722654 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:49:32.722661 kernel: GICv3: 256 SPIs implemented
Aug 12 23:49:32.722667 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:49:32.722673 kernel: GICv3: Distributor has no Range Selector support
Aug 12 23:49:32.722679 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:49:32.722685 kernel: GICv3: 16 PPIs implemented
Aug 12 23:49:32.722691 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:49:32.722697 kernel: ACPI: SRAT not present
Aug 12 23:49:32.722705 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:49:32.722718 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:49:32.722724 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:49:32.722730 kernel: GICv3: using LPI property table @0x00000000400d0000
Aug 12 23:49:32.722737 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Aug 12 23:49:32.722745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:49:32.722751 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:49:32.722757 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:49:32.722764 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:49:32.722770 kernel: arm-pv: using stolen time PV
Aug 12 23:49:32.722776 kernel: Console: colour dummy device 80x25
Aug 12 23:49:32.722783 kernel: ACPI: Core revision 20210730
Aug 12 23:49:32.722789 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:49:32.722798 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:49:32.722804 kernel: LSM: Security Framework initializing
Aug 12 23:49:32.722817 kernel: SELinux: Initializing.
Aug 12 23:49:32.722826 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:49:32.722833 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:49:32.722839 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:49:32.722846 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 12 23:49:32.722852 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 12 23:49:32.722859 kernel: Remapping and enabling EFI services.
Aug 12 23:49:32.722865 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:49:32.722871 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:49:32.722879 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:49:32.722886 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Aug 12 23:49:32.722892 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:49:32.722898 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:49:32.722904 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:49:32.722911 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:49:32.722917 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Aug 12 23:49:32.722924 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:49:32.722930 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:49:32.722936 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:49:32.722943 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:49:32.722950 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Aug 12 23:49:32.722956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:49:32.722962 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:49:32.722974 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:49:32.722984 kernel: SMP: Total of 4 processors activated.
Aug 12 23:49:32.722996 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:49:32.723004 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:49:32.723010 kernel: CPU features: detected: Common not Private translations
Aug 12 23:49:32.723017 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:49:32.723024 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:49:32.723031 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:49:32.723039 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:49:32.723046 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:49:32.723053 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:49:32.723060 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:49:32.723066 kernel: alternatives: patching kernel code
Aug 12 23:49:32.723074 kernel: devtmpfs: initialized
Aug 12 23:49:32.723081 kernel: KASLR enabled
Aug 12 23:49:32.723088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:49:32.723094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:49:32.723101 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:49:32.723107 kernel: SMBIOS 3.0.0 present.
Aug 12 23:49:32.723114 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Aug 12 23:49:32.723121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:49:32.723127 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:49:32.723135 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:49:32.723143 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:49:32.723149 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:49:32.723156 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1
Aug 12 23:49:32.723163 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:49:32.723169 kernel: cpuidle: using governor menu
Aug 12 23:49:32.723176 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:49:32.723183 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:49:32.723189 kernel: ACPI: bus type PCI registered
Aug 12 23:49:32.723197 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:49:32.723203 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:49:32.723210 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:49:32.723216 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:49:32.723223 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:49:32.723230 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:49:32.723236 kernel: cryptd: max_cpu_qlen set to 1000
Aug 12 23:49:32.723242 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:49:32.723249 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:49:32.723257 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:49:32.723264 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:49:32.723270 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 12 23:49:32.723277 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 12 23:49:32.723283 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 12 23:49:32.723290 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:49:32.723296 kernel: ACPI: Interpreter enabled
Aug 12 23:49:32.723303 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:49:32.723335 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:49:32.723345 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:49:32.723352 kernel: printk: console [ttyAMA0] enabled
Aug 12 23:49:32.723358 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:49:32.723532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:49:32.723596 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:49:32.723654 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:49:32.723710 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:49:32.723770 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:49:32.723779 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:49:32.723785 kernel: PCI host bridge to bus 0000:00
Aug 12 23:49:32.723858 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:49:32.723911 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:49:32.723963 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:49:32.724015 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:49:32.724086 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 12 23:49:32.724154 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:49:32.724218 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 12 23:49:32.724277 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 12 23:49:32.724374 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:49:32.724458 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:49:32.724521 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 12 23:49:32.724586 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 12 23:49:32.724640 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:49:32.724700 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:49:32.724752 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:49:32.724760 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:49:32.724767 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:49:32.724774 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:49:32.724783 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:49:32.724789 kernel: iommu: Default domain type: Translated
Aug 12 23:49:32.724796 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:49:32.724803 kernel: vgaarb: loaded
Aug 12 23:49:32.724809 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 12 23:49:32.724823 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 12 23:49:32.724830 kernel: PTP clock support registered
Aug 12 23:49:32.724837 kernel: Registered efivars operations
Aug 12 23:49:32.724843 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:49:32.724850 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:49:32.724860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:49:32.724867 kernel: pnp: PnP ACPI init
Aug 12 23:49:32.724939 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:49:32.724949 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:49:32.724955 kernel: NET: Registered PF_INET protocol family
Aug 12 23:49:32.724962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:49:32.724969 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:49:32.724976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:49:32.724984 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:49:32.724991 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 12 23:49:32.724997 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:49:32.725004 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:49:32.725011 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:49:32.725017 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:49:32.725024 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:49:32.725030 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 12 23:49:32.725037 kernel: kvm [1]: HYP mode not available
Aug 12 23:49:32.725045 kernel: Initialise system trusted keyrings
Aug 12 23:49:32.725051 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:49:32.725058 kernel: Key type asymmetric registered
Aug 12 23:49:32.725064 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:49:32.725071 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:49:32.725091 kernel: io scheduler mq-deadline registered
Aug 12 23:49:32.725098 kernel: io scheduler kyber registered
Aug 12 23:49:32.725105 kernel: io scheduler bfq registered
Aug 12 23:49:32.725111 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:49:32.725119 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:49:32.725126 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:49:32.725185 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:49:32.725194 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:49:32.725201 kernel: thunder_xcv, ver 1.0
Aug 12 23:49:32.725207 kernel: thunder_bgx, ver 1.0
Aug 12 23:49:32.725214 kernel: nicpf, ver 1.0
Aug 12 23:49:32.725220 kernel: nicvf, ver 1.0
Aug 12 23:49:32.725303 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:49:32.725407 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:49:32 UTC (1755042572)
Aug 12 23:49:32.725419 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:49:32.725452 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:49:32.725459 kernel: Segment Routing with IPv6
Aug 12 23:49:32.725465 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:49:32.725472 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:49:32.725478 kernel: Key type dns_resolver registered
Aug 12 23:49:32.725485 kernel: registered taskstats version 1
Aug 12 23:49:32.725495 kernel: Loading compiled-in X.509 certificates
Aug 12 23:49:32.725502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 12 23:49:32.725509 kernel: Key type .fscrypt registered
Aug 12 23:49:32.725515 kernel: Key type fscrypt-provisioning registered
Aug 12 23:49:32.725522 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:49:32.725529 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:49:32.725535 kernel: ima: No architecture policies found
Aug 12 23:49:32.725542 kernel: clk: Disabling unused clocks
Aug 12 23:49:32.725548 kernel: Freeing unused kernel memory: 36416K
Aug 12 23:49:32.725556 kernel: Run /init as init process
Aug 12 23:49:32.725563 kernel: with arguments:
Aug 12 23:49:32.725569 kernel: /init
Aug 12 23:49:32.725575 kernel: with environment:
Aug 12 23:49:32.725582 kernel: HOME=/
Aug 12 23:49:32.725588 kernel: TERM=linux
Aug 12 23:49:32.725594 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:49:32.725603 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 12 23:49:32.725613 systemd[1]: Detected virtualization kvm.
Aug 12 23:49:32.725620 systemd[1]: Detected architecture arm64.
Aug 12 23:49:32.725627 systemd[1]: Running in initrd.
Aug 12 23:49:32.725634 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:49:32.725641 systemd[1]: Hostname set to .
Aug 12 23:49:32.725648 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:49:32.725655 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:49:32.725662 systemd[1]: Started systemd-ask-password-console.path.
Aug 12 23:49:32.725670 systemd[1]: Reached target cryptsetup.target.
Aug 12 23:49:32.725677 systemd[1]: Reached target paths.target.
Aug 12 23:49:32.725684 systemd[1]: Reached target slices.target.
Aug 12 23:49:32.725691 systemd[1]: Reached target swap.target.
Aug 12 23:49:32.725698 systemd[1]: Reached target timers.target.
Aug 12 23:49:32.725705 systemd[1]: Listening on iscsid.socket.
Aug 12 23:49:32.725712 systemd[1]: Listening on iscsiuio.socket.
Aug 12 23:49:32.725721 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 12 23:49:32.725728 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 12 23:49:32.725735 systemd[1]: Listening on systemd-journald.socket.
Aug 12 23:49:32.725742 systemd[1]: Listening on systemd-networkd.socket.
Aug 12 23:49:32.725749 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 12 23:49:32.725756 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 12 23:49:32.725763 systemd[1]: Reached target sockets.target.
Aug 12 23:49:32.725770 systemd[1]: Starting kmod-static-nodes.service...
Aug 12 23:49:32.725777 systemd[1]: Finished network-cleanup.service.
Aug 12 23:49:32.725786 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:49:32.725793 systemd[1]: Starting systemd-journald.service...
Aug 12 23:49:32.725800 systemd[1]: Starting systemd-modules-load.service...
Aug 12 23:49:32.725807 systemd[1]: Starting systemd-resolved.service...
Aug 12 23:49:32.725820 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 12 23:49:32.725828 systemd[1]: Finished kmod-static-nodes.service.
Aug 12 23:49:32.725835 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:49:32.725842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 12 23:49:32.725849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 12 23:49:32.725861 systemd-journald[290]: Journal started
Aug 12 23:49:32.725906 systemd-journald[290]: Runtime Journal (/run/log/journal/d6eadc4d66104ff196f0e66fe05c3468) is 6.0M, max 48.7M, 42.6M free.
Aug 12 23:49:32.720875 systemd-modules-load[291]: Inserted module 'overlay'
Aug 12 23:49:32.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.730444 kernel: audit: type=1130 audit(1755042572.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.730475 systemd[1]: Started systemd-journald.service.
Aug 12 23:49:32.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.731198 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 12 23:49:32.734083 kernel: audit: type=1130 audit(1755042572.730:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.737511 kernel: audit: type=1130 audit(1755042572.734:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.737546 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 12 23:49:32.744997 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:49:32.747999 systemd-modules-load[291]: Inserted module 'br_netfilter'
Aug 12 23:49:32.748719 kernel: Bridge firewalling registered
Aug 12 23:49:32.749327 systemd-resolved[292]: Positive Trust Anchors:
Aug 12 23:49:32.749340 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:49:32.749367 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 12 23:49:32.753510 systemd-resolved[292]: Defaulting to hostname 'linux'.
Aug 12 23:49:32.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.754317 systemd[1]: Started systemd-resolved.service.
Aug 12 23:49:32.759386 kernel: audit: type=1130 audit(1755042572.755:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.758230 systemd[1]: Reached target nss-lookup.target.
Aug 12 23:49:32.760444 kernel: SCSI subsystem initialized
Aug 12 23:49:32.763090 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 12 23:49:32.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.766451 kernel: audit: type=1130 audit(1755042572.763:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.766990 systemd[1]: Starting dracut-cmdline.service...
Aug 12 23:49:32.770054 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:49:32.770070 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:49:32.770079 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 12 23:49:32.772197 systemd-modules-load[291]: Inserted module 'dm_multipath'
Aug 12 23:49:32.772952 systemd[1]: Finished systemd-modules-load.service.
Aug 12 23:49:32.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.776129 systemd[1]: Starting systemd-sysctl.service...
Aug 12 23:49:32.777277 kernel: audit: type=1130 audit(1755042572.773:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.777738 dracut-cmdline[309]: dracut-dracut-053
Aug 12 23:49:32.780073 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 12 23:49:32.785458 systemd[1]: Finished systemd-sysctl.service.
Aug 12 23:49:32.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.789448 kernel: audit: type=1130 audit(1755042572.785:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.841463 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:49:32.857474 kernel: iscsi: registered transport (tcp)
Aug 12 23:49:32.872631 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:49:32.872710 kernel: QLogic iSCSI HBA Driver
Aug 12 23:49:32.916504 systemd[1]: Finished dracut-cmdline.service.
Aug 12 23:49:32.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.919448 kernel: audit: type=1130 audit(1755042572.916:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:32.918149 systemd[1]: Starting dracut-pre-udev.service...
Aug 12 23:49:32.968453 kernel: raid6: neonx8 gen() 13724 MB/s
Aug 12 23:49:32.985455 kernel: raid6: neonx8 xor() 10767 MB/s
Aug 12 23:49:33.002440 kernel: raid6: neonx4 gen() 13554 MB/s
Aug 12 23:49:33.019448 kernel: raid6: neonx4 xor() 11151 MB/s
Aug 12 23:49:33.036462 kernel: raid6: neonx2 gen() 12949 MB/s
Aug 12 23:49:33.053451 kernel: raid6: neonx2 xor() 10330 MB/s
Aug 12 23:49:33.070454 kernel: raid6: neonx1 gen() 10533 MB/s
Aug 12 23:49:33.087451 kernel: raid6: neonx1 xor() 8750 MB/s
Aug 12 23:49:33.104456 kernel: raid6: int64x8 gen() 6257 MB/s
Aug 12 23:49:33.121470 kernel: raid6: int64x8 xor() 3539 MB/s
Aug 12 23:49:33.138475 kernel: raid6: int64x4 gen() 7209 MB/s
Aug 12 23:49:33.155473 kernel: raid6: int64x4 xor() 3852 MB/s
Aug 12 23:49:33.172463 kernel: raid6: int64x2 gen() 6149 MB/s
Aug 12 23:49:33.190956 kernel: raid6: int64x2 xor() 3320 MB/s
Aug 12 23:49:33.207313 kernel: raid6: int64x1 gen() 5040 MB/s
Aug 12 23:49:33.223725 kernel: raid6: int64x1 xor() 2644 MB/s
Aug 12 23:49:33.223787 kernel: raid6: using algorithm neonx8 gen() 13724 MB/s
Aug 12 23:49:33.223797 kernel: raid6: .... xor() 10767 MB/s, rmw enabled
Aug 12 23:49:33.223806 kernel: raid6: using neon recovery algorithm
Aug 12 23:49:33.239953 kernel: xor: measuring software checksum speed
Aug 12 23:49:33.240019 kernel: 8regs : 17260 MB/sec
Aug 12 23:49:33.240029 kernel: 32regs : 20717 MB/sec
Aug 12 23:49:33.240038 kernel: arm64_neon : 27691 MB/sec
Aug 12 23:49:33.240054 kernel: xor: using function: arm64_neon (27691 MB/sec)
Aug 12 23:49:33.300476 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Aug 12 23:49:33.313819 systemd[1]: Finished dracut-pre-udev.service.
Aug 12 23:49:33.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:33.316000 audit: BPF prog-id=7 op=LOAD
Aug 12 23:49:33.316000 audit: BPF prog-id=8 op=LOAD
Aug 12 23:49:33.317298 systemd[1]: Starting systemd-udevd.service...
Aug 12 23:49:33.318395 kernel: audit: type=1130 audit(1755042573.314:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:33.332096 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Aug 12 23:49:33.335361 systemd[1]: Started systemd-udevd.service.
Aug 12 23:49:33.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:33.337261 systemd[1]: Starting dracut-pre-trigger.service...
Aug 12 23:49:33.351814 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Aug 12 23:49:33.383900 systemd[1]: Finished dracut-pre-trigger.service.
Aug 12 23:49:33.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:33.385476 systemd[1]: Starting systemd-udev-trigger.service...
Aug 12 23:49:33.429504 systemd[1]: Finished systemd-udev-trigger.service.
Aug 12 23:49:33.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 12 23:49:33.465889 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:49:33.470224 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:49:33.470239 kernel: GPT:9289727 != 19775487
Aug 12 23:49:33.470248 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:49:33.470256 kernel: GPT:9289727 != 19775487
Aug 12 23:49:33.470265 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:49:33.470273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:49:33.483121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 12 23:49:33.486090 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 12 23:49:33.488540 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (546)
Aug 12 23:49:33.487879 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 12 23:49:33.491784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 12 23:49:33.501352 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 12 23:49:33.502960 systemd[1]: Starting disk-uuid.service...
Aug 12 23:49:33.509261 disk-uuid[563]: Primary Header is updated.
Aug 12 23:49:33.509261 disk-uuid[563]: Secondary Entries is updated.
Aug 12 23:49:33.509261 disk-uuid[563]: Secondary Header is updated.
Aug 12 23:49:33.512442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:49:34.525457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:49:34.525802 disk-uuid[564]: The operation has completed successfully. Aug 12 23:49:34.548650 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:49:34.548746 systemd[1]: Finished disk-uuid.service. Aug 12 23:49:34.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.550276 systemd[1]: Starting verity-setup.service... Aug 12 23:49:34.563446 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 12 23:49:34.586174 systemd[1]: Found device dev-mapper-usr.device. Aug 12 23:49:34.593076 systemd[1]: Mounting sysusr-usr.mount... Aug 12 23:49:34.593846 systemd[1]: Finished verity-setup.service. Aug 12 23:49:34.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.646447 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 12 23:49:34.646542 systemd[1]: Mounted sysusr-usr.mount. Aug 12 23:49:34.647253 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 12 23:49:34.648086 systemd[1]: Starting ignition-setup.service... Aug 12 23:49:34.649804 systemd[1]: Starting parse-ip-for-networkd.service... 
Aug 12 23:49:34.656896 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:49:34.656952 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:49:34.656968 kernel: BTRFS info (device vda6): has skinny extents Aug 12 23:49:34.665896 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 12 23:49:34.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.673520 systemd[1]: Finished ignition-setup.service. Aug 12 23:49:34.675044 systemd[1]: Starting ignition-fetch-offline.service... Aug 12 23:49:34.741633 systemd[1]: Finished parse-ip-for-networkd.service. Aug 12 23:49:34.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.745000 audit: BPF prog-id=9 op=LOAD Aug 12 23:49:34.746951 systemd[1]: Starting systemd-networkd.service... 
Aug 12 23:49:34.759287 ignition[653]: Ignition 2.14.0 Aug 12 23:49:34.759299 ignition[653]: Stage: fetch-offline Aug 12 23:49:34.759339 ignition[653]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:34.759348 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:34.759513 ignition[653]: parsed url from cmdline: "" Aug 12 23:49:34.759516 ignition[653]: no config URL provided Aug 12 23:49:34.759521 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:49:34.759529 ignition[653]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:49:34.759548 ignition[653]: op(1): [started] loading QEMU firmware config module Aug 12 23:49:34.759552 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 12 23:49:34.771775 ignition[653]: op(1): [finished] loading QEMU firmware config module Aug 12 23:49:34.788379 systemd-networkd[738]: lo: Link UP Aug 12 23:49:34.788392 systemd-networkd[738]: lo: Gained carrier Aug 12 23:49:34.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.788816 systemd-networkd[738]: Enumeration completed Aug 12 23:49:34.788922 systemd[1]: Started systemd-networkd.service. Aug 12 23:49:34.789023 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:49:34.790092 systemd[1]: Reached target network.target. Aug 12 23:49:34.790157 systemd-networkd[738]: eth0: Link UP Aug 12 23:49:34.790160 systemd-networkd[738]: eth0: Gained carrier Aug 12 23:49:34.791960 systemd[1]: Starting iscsiuio.service... Aug 12 23:49:34.801115 systemd[1]: Started iscsiuio.service. Aug 12 23:49:34.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:34.802897 systemd[1]: Starting iscsid.service... Aug 12 23:49:34.806688 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 12 23:49:34.806688 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 12 23:49:34.806688 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 12 23:49:34.806688 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 12 23:49:34.806688 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 12 23:49:34.806688 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 12 23:49:34.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.811279 systemd[1]: Started iscsid.service. Aug 12 23:49:34.812908 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:49:34.814479 systemd[1]: Starting dracut-initqueue.service... Aug 12 23:49:34.825720 systemd[1]: Finished dracut-initqueue.service. Aug 12 23:49:34.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.826605 systemd[1]: Reached target remote-fs-pre.target. Aug 12 23:49:34.827819 systemd[1]: Reached target remote-cryptsetup.target. 
Aug 12 23:49:34.828092 ignition[653]: parsing config with SHA512: b5aea375f942c38fec4ea527f13f50a018b0ef80697bf74c8efa6285b582eb8744a242f930aa0d7fd7b1bd22462111517c47cf224544bd437181c12dd4b373ba Aug 12 23:49:34.828941 systemd[1]: Reached target remote-fs.target. Aug 12 23:49:34.831200 systemd[1]: Starting dracut-pre-mount.service... Aug 12 23:49:34.837618 unknown[653]: fetched base config from "system" Aug 12 23:49:34.838236 ignition[653]: fetch-offline: fetch-offline passed Aug 12 23:49:34.837629 unknown[653]: fetched user config from "qemu" Aug 12 23:49:34.838321 ignition[653]: Ignition finished successfully Aug 12 23:49:34.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.839765 systemd[1]: Finished ignition-fetch-offline.service. Aug 12 23:49:34.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.841123 systemd[1]: Finished dracut-pre-mount.service. Aug 12 23:49:34.842077 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 12 23:49:34.842929 systemd[1]: Starting ignition-kargs.service... 
Aug 12 23:49:34.852844 ignition[760]: Ignition 2.14.0 Aug 12 23:49:34.852855 ignition[760]: Stage: kargs Aug 12 23:49:34.852965 ignition[760]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:34.852976 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:34.853992 ignition[760]: kargs: kargs passed Aug 12 23:49:34.854041 ignition[760]: Ignition finished successfully Aug 12 23:49:34.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.855673 systemd[1]: Finished ignition-kargs.service. Aug 12 23:49:34.857412 systemd[1]: Starting ignition-disks.service... Aug 12 23:49:34.864552 ignition[766]: Ignition 2.14.0 Aug 12 23:49:34.864563 ignition[766]: Stage: disks Aug 12 23:49:34.864668 ignition[766]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:34.864678 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:34.865641 ignition[766]: disks: disks passed Aug 12 23:49:34.865690 ignition[766]: Ignition finished successfully Aug 12 23:49:34.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.867772 systemd[1]: Finished ignition-disks.service. Aug 12 23:49:34.868722 systemd[1]: Reached target initrd-root-device.target. Aug 12 23:49:34.869647 systemd[1]: Reached target local-fs-pre.target. Aug 12 23:49:34.870656 systemd[1]: Reached target local-fs.target. Aug 12 23:49:34.871656 systemd[1]: Reached target sysinit.target. Aug 12 23:49:34.872620 systemd[1]: Reached target basic.target. Aug 12 23:49:34.874533 systemd[1]: Starting systemd-fsck-root.service... 
Aug 12 23:49:34.886336 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56026/553472 blocks Aug 12 23:49:34.889779 systemd[1]: Finished systemd-fsck-root.service. Aug 12 23:49:34.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.892205 systemd[1]: Mounting sysroot.mount... Aug 12 23:49:34.898439 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 12 23:49:34.899200 systemd[1]: Mounted sysroot.mount. Aug 12 23:49:34.899844 systemd[1]: Reached target initrd-root-fs.target. Aug 12 23:49:34.901782 systemd[1]: Mounting sysroot-usr.mount... Aug 12 23:49:34.902541 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 12 23:49:34.902586 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:49:34.902610 systemd[1]: Reached target ignition-diskful.target. Aug 12 23:49:34.904850 systemd[1]: Mounted sysroot-usr.mount. Aug 12 23:49:34.906871 systemd[1]: Starting initrd-setup-root.service... Aug 12 23:49:34.911663 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:49:34.916362 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:49:34.920379 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:49:34.924535 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:49:34.954312 systemd[1]: Finished initrd-setup-root.service. Aug 12 23:49:34.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:34.955821 systemd[1]: Starting ignition-mount.service... Aug 12 23:49:34.957021 systemd[1]: Starting sysroot-boot.service... Aug 12 23:49:34.961740 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Aug 12 23:49:34.971144 ignition[827]: INFO : Ignition 2.14.0 Aug 12 23:49:34.971144 ignition[827]: INFO : Stage: mount Aug 12 23:49:34.972472 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:34.972472 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:34.974598 ignition[827]: INFO : mount: mount passed Aug 12 23:49:34.974598 ignition[827]: INFO : Ignition finished successfully Aug 12 23:49:34.975312 systemd[1]: Finished ignition-mount.service. Aug 12 23:49:34.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:34.978119 systemd[1]: Finished sysroot-boot.service. Aug 12 23:49:34.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:35.601536 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 12 23:49:35.607452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (835) Aug 12 23:49:35.608754 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:49:35.608768 kernel: BTRFS info (device vda6): using free space tree Aug 12 23:49:35.608778 kernel: BTRFS info (device vda6): has skinny extents Aug 12 23:49:35.611994 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 12 23:49:35.613418 systemd[1]: Starting ignition-files.service... 
Aug 12 23:49:35.628751 ignition[855]: INFO : Ignition 2.14.0 Aug 12 23:49:35.628751 ignition[855]: INFO : Stage: files Aug 12 23:49:35.630047 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:35.630047 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:35.630047 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:49:35.632677 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:49:35.632677 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:49:35.634502 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:49:35.634502 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:49:35.636411 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:49:35.636411 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:49:35.636411 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 12 23:49:35.634778 unknown[855]: wrote ssh authorized keys file for user: core Aug 12 23:49:35.757034 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 12 23:49:35.838181 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 12 23:49:35.839686 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:49:35.839686 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 12 23:49:36.072744 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 12 23:49:36.198546 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:49:36.200172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 12 23:49:36.480872 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 12 23:49:36.602534 systemd-networkd[738]: eth0: Gained IPv6LL Aug 12 23:49:37.006616 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 12 23:49:37.006616 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(e): op(f): [finished] writing 
unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:49:37.009510 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:49:37.047560 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:49:37.048769 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:49:37.048769 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:49:37.048769 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:49:37.048769 ignition[855]: INFO : files: files passed Aug 12 23:49:37.048769 ignition[855]: INFO : Ignition finished successfully Aug 12 23:49:37.057453 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 12 23:49:37.057476 kernel: audit: type=1130 audit(1755042577.050:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:37.049094 systemd[1]: Finished ignition-files.service. Aug 12 23:49:37.051553 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 12 23:49:37.059216 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 12 23:49:37.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.054720 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 12 23:49:37.067790 kernel: audit: type=1130 audit(1755042577.059:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.067826 kernel: audit: type=1130 audit(1755042577.062:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.067838 kernel: audit: type=1131 audit(1755042577.062:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:37.067933 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:49:37.055499 systemd[1]: Starting ignition-quench.service... Aug 12 23:49:37.058671 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 12 23:49:37.060077 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:49:37.060157 systemd[1]: Finished ignition-quench.service. Aug 12 23:49:37.063365 systemd[1]: Reached target ignition-complete.target. Aug 12 23:49:37.069189 systemd[1]: Starting initrd-parse-etc.service... Aug 12 23:49:37.082987 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:49:37.083091 systemd[1]: Finished initrd-parse-etc.service. Aug 12 23:49:37.088268 kernel: audit: type=1130 audit(1755042577.083:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.088290 kernel: audit: type=1131 audit(1755042577.083:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.084468 systemd[1]: Reached target initrd-fs.target. Aug 12 23:49:37.088871 systemd[1]: Reached target initrd.target. Aug 12 23:49:37.089872 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Aug 12 23:49:37.090741 systemd[1]: Starting dracut-pre-pivot.service... Aug 12 23:49:37.101492 systemd[1]: Finished dracut-pre-pivot.service. Aug 12 23:49:37.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.104434 kernel: audit: type=1130 audit(1755042577.101:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.102981 systemd[1]: Starting initrd-cleanup.service... Aug 12 23:49:37.111247 systemd[1]: Stopped target nss-lookup.target. Aug 12 23:49:37.111968 systemd[1]: Stopped target remote-cryptsetup.target. Aug 12 23:49:37.113090 systemd[1]: Stopped target timers.target. Aug 12 23:49:37.114081 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:49:37.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.114198 systemd[1]: Stopped dracut-pre-pivot.service. Aug 12 23:49:37.118322 kernel: audit: type=1131 audit(1755042577.114:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.115169 systemd[1]: Stopped target initrd.target. Aug 12 23:49:37.117937 systemd[1]: Stopped target basic.target. Aug 12 23:49:37.118885 systemd[1]: Stopped target ignition-complete.target. Aug 12 23:49:37.119891 systemd[1]: Stopped target ignition-diskful.target. Aug 12 23:49:37.120903 systemd[1]: Stopped target initrd-root-device.target. Aug 12 23:49:37.122041 systemd[1]: Stopped target remote-fs.target. Aug 12 23:49:37.123039 systemd[1]: Stopped target remote-fs-pre.target. 
Aug 12 23:49:37.124090 systemd[1]: Stopped target sysinit.target. Aug 12 23:49:37.125013 systemd[1]: Stopped target local-fs.target. Aug 12 23:49:37.126024 systemd[1]: Stopped target local-fs-pre.target. Aug 12 23:49:37.127032 systemd[1]: Stopped target swap.target. Aug 12 23:49:37.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.127979 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:49:37.132168 kernel: audit: type=1131 audit(1755042577.128:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.128099 systemd[1]: Stopped dracut-pre-mount.service. Aug 12 23:49:37.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.129111 systemd[1]: Stopped target cryptsetup.target. Aug 12 23:49:37.136009 kernel: audit: type=1131 audit(1755042577.132:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.131646 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:49:37.131753 systemd[1]: Stopped dracut-initqueue.service. Aug 12 23:49:37.132870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:49:37.132966 systemd[1]: Stopped ignition-fetch-offline.service. 
Aug 12 23:49:37.135576 systemd[1]: Stopped target paths.target. Aug 12 23:49:37.136540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:49:37.140477 systemd[1]: Stopped systemd-ask-password-console.path. Aug 12 23:49:37.141764 systemd[1]: Stopped target slices.target. Aug 12 23:49:37.142348 systemd[1]: Stopped target sockets.target. Aug 12 23:49:37.143278 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:49:37.143354 systemd[1]: Closed iscsid.socket. Aug 12 23:49:37.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.144184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:49:37.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.144288 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 12 23:49:37.145310 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:49:37.145397 systemd[1]: Stopped ignition-files.service. Aug 12 23:49:37.147075 systemd[1]: Stopping ignition-mount.service... Aug 12 23:49:37.148103 systemd[1]: Stopping iscsiuio.service... Aug 12 23:49:37.148969 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:49:37.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.149085 systemd[1]: Stopped kmod-static-nodes.service. Aug 12 23:49:37.150975 systemd[1]: Stopping sysroot-boot.service... Aug 12 23:49:37.151600 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 12 23:49:37.151724 systemd[1]: Stopped systemd-udev-trigger.service. Aug 12 23:49:37.152966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:49:37.153053 systemd[1]: Stopped dracut-pre-trigger.service. Aug 12 23:49:37.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.155402 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 12 23:49:37.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.159012 ignition[895]: INFO : Ignition 2.14.0 Aug 12 23:49:37.159012 ignition[895]: INFO : Stage: umount Aug 12 23:49:37.155516 systemd[1]: Stopped iscsiuio.service. 
Aug 12 23:49:37.160922 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:49:37.160922 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:49:37.160922 ignition[895]: INFO : umount: umount passed Aug 12 23:49:37.160922 ignition[895]: INFO : Ignition finished successfully Aug 12 23:49:37.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.156571 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:49:37.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.156635 systemd[1]: Closed iscsiuio.socket. Aug 12 23:49:37.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.157650 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:49:37.157721 systemd[1]: Finished initrd-cleanup.service. Aug 12 23:49:37.161309 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:49:37.161390 systemd[1]: Stopped ignition-mount.service. Aug 12 23:49:37.162335 systemd[1]: Stopped target network.target. Aug 12 23:49:37.163753 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:49:37.163816 systemd[1]: Stopped ignition-disks.service. Aug 12 23:49:37.164879 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:49:37.164915 systemd[1]: Stopped ignition-kargs.service. 
Aug 12 23:49:37.166018 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:49:37.166053 systemd[1]: Stopped ignition-setup.service. Aug 12 23:49:37.167243 systemd[1]: Stopping systemd-networkd.service... Aug 12 23:49:37.168326 systemd[1]: Stopping systemd-resolved.service... Aug 12 23:49:37.170656 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:49:37.176525 systemd-networkd[738]: eth0: DHCPv6 lease lost Aug 12 23:49:37.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.177795 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:49:37.177909 systemd[1]: Stopped systemd-networkd.service. Aug 12 23:49:37.179060 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:49:37.179088 systemd[1]: Closed systemd-networkd.socket. Aug 12 23:49:37.180933 systemd[1]: Stopping network-cleanup.service... Aug 12 23:49:37.184000 audit: BPF prog-id=9 op=UNLOAD Aug 12 23:49:37.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.184119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:49:37.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.184189 systemd[1]: Stopped parse-ip-for-networkd.service. 
Aug 12 23:49:37.186400 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:49:37.186462 systemd[1]: Stopped systemd-sysctl.service. Aug 12 23:49:37.187758 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:49:37.187812 systemd[1]: Stopped systemd-modules-load.service. Aug 12 23:49:37.191644 systemd[1]: Stopping systemd-udevd.service... Aug 12 23:49:37.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.193601 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 12 23:49:37.194095 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:49:37.194194 systemd[1]: Stopped systemd-resolved.service. Aug 12 23:49:37.198079 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:49:37.198216 systemd[1]: Stopped systemd-udevd.service. Aug 12 23:49:37.199000 audit: BPF prog-id=6 op=UNLOAD Aug 12 23:49:37.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.200660 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:49:37.200764 systemd[1]: Stopped network-cleanup.service. Aug 12 23:49:37.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.202457 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:49:37.202496 systemd[1]: Closed systemd-udevd-control.socket. Aug 12 23:49:37.203458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 12 23:49:37.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.203488 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 12 23:49:37.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.204856 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:49:37.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.204902 systemd[1]: Stopped dracut-pre-udev.service. Aug 12 23:49:37.206051 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:49:37.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.206089 systemd[1]: Stopped dracut-cmdline.service. Aug 12 23:49:37.207711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:49:37.207750 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 12 23:49:37.209700 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 12 23:49:37.210326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:49:37.210383 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 12 23:49:37.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:37.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.216127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:49:37.216224 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 12 23:49:37.223595 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:49:37.223693 systemd[1]: Stopped sysroot-boot.service. Aug 12 23:49:37.225141 systemd[1]: Reached target initrd-switch-root.target. Aug 12 23:49:37.226156 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:49:37.226209 systemd[1]: Stopped initrd-setup-root.service. Aug 12 23:49:37.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:37.228259 systemd[1]: Starting initrd-switch-root.service... Aug 12 23:49:37.235035 systemd[1]: Switching root. Aug 12 23:49:37.249059 iscsid[746]: iscsid shutting down. Aug 12 23:49:37.249754 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Aug 12 23:49:37.249813 systemd-journald[290]: Journal stopped Aug 12 23:49:39.340597 kernel: SELinux: Class mctp_socket not defined in policy. Aug 12 23:49:39.340656 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 12 23:49:39.340668 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 12 23:49:39.340677 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:49:39.340687 kernel: SELinux: policy capability open_perms=1 Aug 12 23:49:39.340700 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:49:39.340709 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:49:39.340723 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:49:39.340733 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:49:39.340743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:49:39.340752 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:49:39.340764 systemd[1]: Successfully loaded SELinux policy in 32.821ms. Aug 12 23:49:39.340788 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.034ms. Aug 12 23:49:39.340808 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 12 23:49:39.340824 systemd[1]: Detected virtualization kvm. Aug 12 23:49:39.340834 systemd[1]: Detected architecture arm64. Aug 12 23:49:39.340846 systemd[1]: Detected first boot. Aug 12 23:49:39.340856 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:49:39.340867 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 12 23:49:39.340877 systemd[1]: Populated /etc with preset unit settings. Aug 12 23:49:39.340889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 12 23:49:39.340903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 12 23:49:39.340917 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:49:39.340931 systemd[1]: iscsid.service: Deactivated successfully. Aug 12 23:49:39.340943 systemd[1]: Stopped iscsid.service. Aug 12 23:49:39.340954 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:49:39.340965 systemd[1]: Stopped initrd-switch-root.service. Aug 12 23:49:39.340975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:49:39.340986 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 12 23:49:39.340997 systemd[1]: Created slice system-addon\x2drun.slice. Aug 12 23:49:39.341007 systemd[1]: Created slice system-getty.slice. Aug 12 23:49:39.341019 systemd[1]: Created slice system-modprobe.slice. Aug 12 23:49:39.341029 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 12 23:49:39.341041 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 12 23:49:39.341053 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 12 23:49:39.341063 systemd[1]: Created slice user.slice. Aug 12 23:49:39.341074 systemd[1]: Started systemd-ask-password-console.path. Aug 12 23:49:39.341085 systemd[1]: Started systemd-ask-password-wall.path. Aug 12 23:49:39.341096 systemd[1]: Set up automount boot.automount. Aug 12 23:49:39.341107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 12 23:49:39.341117 systemd[1]: Stopped target initrd-switch-root.target. Aug 12 23:49:39.341130 systemd[1]: Stopped target initrd-fs.target. Aug 12 23:49:39.341145 systemd[1]: Stopped target initrd-root-fs.target. 
Aug 12 23:49:39.341157 systemd[1]: Reached target integritysetup.target. Aug 12 23:49:39.341168 systemd[1]: Reached target remote-cryptsetup.target. Aug 12 23:49:39.341184 systemd[1]: Reached target remote-fs.target. Aug 12 23:49:39.341195 systemd[1]: Reached target slices.target. Aug 12 23:49:39.341205 systemd[1]: Reached target swap.target. Aug 12 23:49:39.341216 systemd[1]: Reached target torcx.target. Aug 12 23:49:39.341226 systemd[1]: Reached target veritysetup.target. Aug 12 23:49:39.341237 systemd[1]: Listening on systemd-coredump.socket. Aug 12 23:49:39.341250 systemd[1]: Listening on systemd-initctl.socket. Aug 12 23:49:39.341260 systemd[1]: Listening on systemd-networkd.socket. Aug 12 23:49:39.341271 systemd[1]: Listening on systemd-udevd-control.socket. Aug 12 23:49:39.341281 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 12 23:49:39.341292 systemd[1]: Listening on systemd-userdbd.socket. Aug 12 23:49:39.341302 systemd[1]: Mounting dev-hugepages.mount... Aug 12 23:49:39.341313 systemd[1]: Mounting dev-mqueue.mount... Aug 12 23:49:39.341324 systemd[1]: Mounting media.mount... Aug 12 23:49:39.341334 systemd[1]: Mounting sys-kernel-debug.mount... Aug 12 23:49:39.341346 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 12 23:49:39.341356 systemd[1]: Mounting tmp.mount... Aug 12 23:49:39.341367 systemd[1]: Starting flatcar-tmpfiles.service... Aug 12 23:49:39.341378 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 12 23:49:39.341390 systemd[1]: Starting kmod-static-nodes.service... Aug 12 23:49:39.341401 systemd[1]: Starting modprobe@configfs.service... Aug 12 23:49:39.341412 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:49:39.341454 systemd[1]: Starting modprobe@drm.service... Aug 12 23:49:39.341467 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:49:39.341480 systemd[1]: Starting modprobe@fuse.service... 
Aug 12 23:49:39.341491 systemd[1]: Starting modprobe@loop.service... Aug 12 23:49:39.341502 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:49:39.341513 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:49:39.341524 systemd[1]: Stopped systemd-fsck-root.service. Aug 12 23:49:39.341535 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 12 23:49:39.341547 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:49:39.341557 kernel: loop: module loaded Aug 12 23:49:39.341569 systemd[1]: Stopped systemd-journald.service. Aug 12 23:49:39.341581 kernel: fuse: init (API version 7.34) Aug 12 23:49:39.341592 systemd[1]: Starting systemd-journald.service... Aug 12 23:49:39.341603 systemd[1]: Starting systemd-modules-load.service... Aug 12 23:49:39.341615 systemd[1]: Starting systemd-network-generator.service... Aug 12 23:49:39.341625 systemd[1]: Starting systemd-remount-fs.service... Aug 12 23:49:39.341636 systemd[1]: Starting systemd-udev-trigger.service... Aug 12 23:49:39.341647 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:49:39.341658 systemd[1]: Stopped verity-setup.service. Aug 12 23:49:39.341669 systemd[1]: Mounted dev-hugepages.mount. Aug 12 23:49:39.341679 systemd[1]: Mounted dev-mqueue.mount. Aug 12 23:49:39.341690 systemd[1]: Mounted media.mount. Aug 12 23:49:39.341702 systemd[1]: Mounted sys-kernel-debug.mount. Aug 12 23:49:39.341714 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 12 23:49:39.341725 systemd[1]: Mounted tmp.mount. Aug 12 23:49:39.341735 systemd[1]: Finished kmod-static-nodes.service. Aug 12 23:49:39.341749 systemd-journald[998]: Journal started Aug 12 23:49:39.341801 systemd-journald[998]: Runtime Journal (/run/log/journal/d6eadc4d66104ff196f0e66fe05c3468) is 6.0M, max 48.7M, 42.6M free. 
Aug 12 23:49:37.310000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:49:37.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 12 23:49:37.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 12 23:49:37.420000 audit: BPF prog-id=10 op=LOAD Aug 12 23:49:37.420000 audit: BPF prog-id=10 op=UNLOAD Aug 12 23:49:37.420000 audit: BPF prog-id=11 op=LOAD Aug 12 23:49:37.420000 audit: BPF prog-id=11 op=UNLOAD Aug 12 23:49:37.463000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 12 23:49:37.463000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58ac a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=910 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:49:37.463000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 12 23:49:37.464000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 12 23:49:37.464000 audit[928]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5985 a2=1ed a3=0 items=2 ppid=910 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:49:37.464000 audit: CWD cwd="/" Aug 12 23:49:37.464000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:49:37.464000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 12 23:49:37.464000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 12 23:49:39.215000 audit: BPF prog-id=12 op=LOAD Aug 12 23:49:39.215000 audit: BPF prog-id=3 op=UNLOAD Aug 12 23:49:39.216000 audit: BPF prog-id=13 op=LOAD Aug 12 23:49:39.216000 audit: BPF prog-id=14 op=LOAD Aug 12 23:49:39.216000 audit: BPF prog-id=4 op=UNLOAD Aug 12 23:49:39.216000 audit: BPF prog-id=5 op=UNLOAD Aug 12 23:49:39.216000 audit: BPF prog-id=15 op=LOAD Aug 12 23:49:39.216000 audit: BPF prog-id=12 op=UNLOAD Aug 12 23:49:39.216000 audit: BPF prog-id=16 op=LOAD Aug 12 23:49:39.216000 audit: BPF prog-id=17 op=LOAD Aug 12 23:49:39.216000 audit: BPF prog-id=13 op=UNLOAD Aug 12 23:49:39.216000 audit: BPF prog-id=14 op=UNLOAD Aug 12 23:49:39.218000 audit: BPF prog-id=18 op=LOAD Aug 12 23:49:39.218000 audit: BPF prog-id=15 op=UNLOAD Aug 12 23:49:39.218000 audit: BPF prog-id=19 op=LOAD Aug 12 23:49:39.218000 audit: BPF prog-id=20 op=LOAD Aug 12 23:49:39.218000 audit: BPF 
prog-id=16 op=UNLOAD Aug 12 23:49:39.218000 audit: BPF prog-id=17 op=UNLOAD Aug 12 23:49:39.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.227000 audit: BPF prog-id=18 op=UNLOAD Aug 12 23:49:39.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:39.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.313000 audit: BPF prog-id=21 op=LOAD Aug 12 23:49:39.314000 audit: BPF prog-id=22 op=LOAD Aug 12 23:49:39.314000 audit: BPF prog-id=23 op=LOAD Aug 12 23:49:39.314000 audit: BPF prog-id=19 op=UNLOAD Aug 12 23:49:39.314000 audit: BPF prog-id=20 op=UNLOAD Aug 12 23:49:39.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.338000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 12 23:49:39.338000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdee17c90 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:49:39.338000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 12 23:49:39.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.214613 systemd[1]: Queued start job for default target multi-user.target. 
Aug 12 23:49:37.461974 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 12 23:49:39.214627 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 12 23:49:37.462253 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 12 23:49:39.219326 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:49:37.462272 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 12 23:49:37.462303 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 12 23:49:37.462312 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 12 23:49:37.462342 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 12 23:49:37.462354 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 12 23:49:37.462579 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 12 23:49:37.462614 /usr/lib/systemd/system-generators/torcx-generator[928]: 
time="2025-08-12T23:49:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 12 23:49:39.343956 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:49:39.343978 systemd[1]: Finished modprobe@configfs.service. Aug 12 23:49:37.462625 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 12 23:49:37.463456 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 12 23:49:37.463494 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 12 23:49:37.463513 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 12 23:49:37.463527 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 12 23:49:37.463546 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 12 23:49:37.463560 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 12 23:49:38.943715 
/usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 12 23:49:38.943989 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 12 23:49:38.944095 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 12 23:49:38.944265 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 12 23:49:38.944316 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 12 23:49:38.944374 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-08-12T23:49:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 12 23:49:39.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.345536 systemd[1]: Started systemd-journald.service. Aug 12 23:49:39.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.346128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:49:39.346290 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:49:39.347275 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:49:39.347415 systemd[1]: Finished modprobe@drm.service. Aug 12 23:49:39.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:39.348393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:49:39.348626 systemd[1]: Finished modprobe@efi_pstore.service. Aug 12 23:49:39.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.349582 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:49:39.349725 systemd[1]: Finished modprobe@fuse.service. Aug 12 23:49:39.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.350732 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:49:39.350919 systemd[1]: Finished modprobe@loop.service. Aug 12 23:49:39.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.352069 systemd[1]: Finished systemd-modules-load.service. 
Aug 12 23:49:39.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.353262 systemd[1]: Finished systemd-network-generator.service. Aug 12 23:49:39.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.354621 systemd[1]: Finished flatcar-tmpfiles.service. Aug 12 23:49:39.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.355629 systemd[1]: Finished systemd-remount-fs.service. Aug 12 23:49:39.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.356918 systemd[1]: Reached target network-pre.target. Aug 12 23:49:39.359084 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 12 23:49:39.361325 systemd[1]: Mounting sys-kernel-config.mount... Aug 12 23:49:39.362383 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:49:39.365470 systemd[1]: Starting systemd-hwdb-update.service... Aug 12 23:49:39.367681 systemd[1]: Starting systemd-journal-flush.service... Aug 12 23:49:39.368472 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:49:39.369905 systemd[1]: Starting systemd-random-seed.service... 
Aug 12 23:49:39.370859 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 12 23:49:39.372522 systemd[1]: Starting systemd-sysctl.service... Aug 12 23:49:39.374984 systemd[1]: Starting systemd-sysusers.service... Aug 12 23:49:39.378743 systemd-journald[998]: Time spent on flushing to /var/log/journal/d6eadc4d66104ff196f0e66fe05c3468 is 25.494ms for 1001 entries. Aug 12 23:49:39.378743 systemd-journald[998]: System Journal (/var/log/journal/d6eadc4d66104ff196f0e66fe05c3468) is 8.0M, max 195.6M, 187.6M free. Aug 12 23:49:39.421770 systemd-journald[998]: Received client request to flush runtime journal. Aug 12 23:49:39.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.380308 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 12 23:49:39.381239 systemd[1]: Mounted sys-kernel-config.mount. Aug 12 23:49:39.422340 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 12 23:49:39.397244 systemd[1]: Finished systemd-random-seed.service. 
Aug 12 23:49:39.398996 systemd[1]: Reached target first-boot-complete.target. Aug 12 23:49:39.402527 systemd[1]: Finished systemd-udev-trigger.service. Aug 12 23:49:39.405184 systemd[1]: Starting systemd-udev-settle.service... Aug 12 23:49:39.415470 systemd[1]: Finished systemd-sysctl.service. Aug 12 23:49:39.418149 systemd[1]: Finished systemd-sysusers.service. Aug 12 23:49:39.422755 systemd[1]: Finished systemd-journal-flush.service. Aug 12 23:49:39.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.778000 audit: BPF prog-id=24 op=LOAD Aug 12 23:49:39.778000 audit: BPF prog-id=25 op=LOAD Aug 12 23:49:39.778000 audit: BPF prog-id=7 op=UNLOAD Aug 12 23:49:39.778000 audit: BPF prog-id=8 op=UNLOAD Aug 12 23:49:39.777237 systemd[1]: Finished systemd-hwdb-update.service. Aug 12 23:49:39.779388 systemd[1]: Starting systemd-udevd.service... Aug 12 23:49:39.798090 systemd-udevd[1031]: Using default interface naming scheme 'v252'. Aug 12 23:49:39.810008 systemd[1]: Started systemd-udevd.service. Aug 12 23:49:39.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.811000 audit: BPF prog-id=26 op=LOAD Aug 12 23:49:39.812371 systemd[1]: Starting systemd-networkd.service... 
Aug 12 23:49:39.833000 audit: BPF prog-id=27 op=LOAD Aug 12 23:49:39.833000 audit: BPF prog-id=28 op=LOAD Aug 12 23:49:39.833000 audit: BPF prog-id=29 op=LOAD Aug 12 23:49:39.834492 systemd[1]: Starting systemd-userdbd.service... Aug 12 23:49:39.835474 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Aug 12 23:49:39.884438 systemd[1]: Started systemd-userdbd.service. Aug 12 23:49:39.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.896497 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 12 23:49:39.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.938568 systemd-networkd[1037]: lo: Link UP Aug 12 23:49:39.938576 systemd-networkd[1037]: lo: Gained carrier Aug 12 23:49:39.938895 systemd[1]: Finished systemd-udev-settle.service. Aug 12 23:49:39.939003 systemd-networkd[1037]: Enumeration completed Aug 12 23:49:39.939118 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:49:39.939893 systemd[1]: Started systemd-networkd.service. Aug 12 23:49:39.940282 systemd-networkd[1037]: eth0: Link UP Aug 12 23:49:39.940286 systemd-networkd[1037]: eth0: Gained carrier Aug 12 23:49:39.941908 systemd[1]: Starting lvm2-activation-early.service... Aug 12 23:49:39.956514 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 12 23:49:39.961585 systemd-networkd[1037]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:49:39.983349 systemd[1]: Finished lvm2-activation-early.service. Aug 12 23:49:39.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:39.984259 systemd[1]: Reached target cryptsetup.target. Aug 12 23:49:39.986237 systemd[1]: Starting lvm2-activation.service... Aug 12 23:49:39.990088 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:49:40.019439 systemd[1]: Finished lvm2-activation.service. Aug 12 23:49:40.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.020259 systemd[1]: Reached target local-fs-pre.target. Aug 12 23:49:40.020952 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:49:40.020988 systemd[1]: Reached target local-fs.target. Aug 12 23:49:40.021576 systemd[1]: Reached target machines.target. Aug 12 23:49:40.023488 systemd[1]: Starting ldconfig.service... Aug 12 23:49:40.024365 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.024472 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.025849 systemd[1]: Starting systemd-boot-update.service... Aug 12 23:49:40.027981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 12 23:49:40.030192 systemd[1]: Starting systemd-machine-id-commit.service... 
Aug 12 23:49:40.033022 systemd[1]: Starting systemd-sysext.service... Aug 12 23:49:40.034210 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1067 (bootctl) Aug 12 23:49:40.035861 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 12 23:49:40.040229 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 12 23:49:40.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.052857 systemd[1]: Unmounting usr-share-oem.mount... Aug 12 23:49:40.059142 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 12 23:49:40.059368 systemd[1]: Unmounted usr-share-oem.mount. Aug 12 23:49:40.114518 kernel: loop0: detected capacity change from 0 to 203944 Aug 12 23:49:40.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.116997 systemd[1]: Finished systemd-machine-id-commit.service. Aug 12 23:49:40.123202 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Aug 12 23:49:40.123202 systemd-fsck[1078]: /dev/vda1: 236 files, 117307/258078 clusters Aug 12 23:49:40.125310 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 12 23:49:40.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 12 23:49:40.130503 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 12 23:49:40.153505 kernel: loop1: detected capacity change from 0 to 203944 Aug 12 23:49:40.161025 (sd-sysext)[1082]: Using extensions 'kubernetes'. Aug 12 23:49:40.162458 (sd-sysext)[1082]: Merged extensions into '/usr'. Aug 12 23:49:40.180460 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.182392 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:49:40.184704 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:49:40.186821 systemd[1]: Starting modprobe@loop.service... Aug 12 23:49:40.187664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.187862 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.188740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:49:40.188892 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:49:40.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.190746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:49:40.190894 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 12 23:49:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.192311 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:49:40.192489 systemd[1]: Finished modprobe@loop.service. Aug 12 23:49:40.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.193808 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:49:40.193918 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.240157 ldconfig[1066]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:49:40.245841 systemd[1]: Finished ldconfig.service. Aug 12 23:49:40.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.332128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:49:40.334017 systemd[1]: Mounting boot.mount... 
Aug 12 23:49:40.336018 systemd[1]: Mounting usr-share-oem.mount... Aug 12 23:49:40.343115 systemd[1]: Mounted boot.mount. Aug 12 23:49:40.344288 systemd[1]: Mounted usr-share-oem.mount. Aug 12 23:49:40.346227 systemd[1]: Finished systemd-sysext.service. Aug 12 23:49:40.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.349374 systemd[1]: Starting ensure-sysext.service... Aug 12 23:49:40.351154 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 12 23:49:40.352325 systemd[1]: Finished systemd-boot-update.service. Aug 12 23:49:40.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.356384 systemd[1]: Reloading. Aug 12 23:49:40.365055 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 12 23:49:40.377045 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 12 23:49:40.382383 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 12 23:49:40.385219 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-08-12T23:49:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 12 23:49:40.385588 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-08-12T23:49:40Z" level=info msg="torcx already run" Aug 12 23:49:40.461805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 12 23:49:40.461825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 12 23:49:40.477252 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 12 23:49:40.520000 audit: BPF prog-id=30 op=LOAD Aug 12 23:49:40.520000 audit: BPF prog-id=26 op=UNLOAD Aug 12 23:49:40.521000 audit: BPF prog-id=31 op=LOAD Aug 12 23:49:40.521000 audit: BPF prog-id=27 op=UNLOAD Aug 12 23:49:40.521000 audit: BPF prog-id=32 op=LOAD Aug 12 23:49:40.521000 audit: BPF prog-id=33 op=LOAD Aug 12 23:49:40.521000 audit: BPF prog-id=28 op=UNLOAD Aug 12 23:49:40.521000 audit: BPF prog-id=29 op=UNLOAD Aug 12 23:49:40.521000 audit: BPF prog-id=34 op=LOAD Aug 12 23:49:40.521000 audit: BPF prog-id=35 op=LOAD Aug 12 23:49:40.521000 audit: BPF prog-id=24 op=UNLOAD Aug 12 23:49:40.521000 audit: BPF prog-id=25 op=UNLOAD Aug 12 23:49:40.522000 audit: BPF prog-id=36 op=LOAD Aug 12 23:49:40.522000 audit: BPF prog-id=21 op=UNLOAD Aug 12 23:49:40.522000 audit: BPF prog-id=37 op=LOAD Aug 12 23:49:40.522000 audit: BPF prog-id=38 op=LOAD Aug 12 23:49:40.522000 audit: BPF prog-id=22 op=UNLOAD Aug 12 23:49:40.522000 audit: BPF prog-id=23 op=UNLOAD Aug 12 23:49:40.524717 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 12 23:49:40.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.529023 systemd[1]: Starting audit-rules.service... Aug 12 23:49:40.530889 systemd[1]: Starting clean-ca-certificates.service... Aug 12 23:49:40.532944 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 12 23:49:40.534000 audit: BPF prog-id=39 op=LOAD Aug 12 23:49:40.539000 audit: BPF prog-id=40 op=LOAD Aug 12 23:49:40.538821 systemd[1]: Starting systemd-resolved.service... Aug 12 23:49:40.541530 systemd[1]: Starting systemd-timesyncd.service... Aug 12 23:49:40.544675 systemd[1]: Starting systemd-update-utmp.service... Aug 12 23:49:40.548858 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Aug 12 23:49:40.549000 audit[1159]: SYSTEM_BOOT pid=1159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.550695 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:49:40.552446 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:49:40.554295 systemd[1]: Starting modprobe@loop.service... Aug 12 23:49:40.554951 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.555075 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.556045 systemd[1]: Finished clean-ca-certificates.service. Aug 12 23:49:40.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.557169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:49:40.557290 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:49:40.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.558306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:49:40.558417 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 12 23:49:40.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.559556 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:49:40.559661 systemd[1]: Finished modprobe@loop.service. Aug 12 23:49:40.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.562610 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 12 23:49:40.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.564910 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.566329 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:49:40.568175 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:49:40.570785 systemd[1]: Starting modprobe@loop.service... Aug 12 23:49:40.571379 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 12 23:49:40.571612 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.573045 systemd[1]: Starting systemd-update-done.service... Aug 12 23:49:40.573778 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:49:40.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.575134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:49:40.575284 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:49:40.576507 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:49:40.576625 systemd[1]: Finished modprobe@loop.service. Aug 12 23:49:40.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.578104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:49:40.578262 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 12 23:49:40.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.579478 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:49:40.579602 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.580867 systemd[1]: Finished systemd-update-utmp.service. Aug 12 23:49:40.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 12 23:49:40.581948 systemd[1]: Finished systemd-update-done.service. Aug 12 23:49:40.585507 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.587100 systemd[1]: Starting modprobe@dm_mod.service... Aug 12 23:49:40.588900 systemd[1]: Starting modprobe@drm.service... Aug 12 23:49:40.593876 systemd[1]: Starting modprobe@efi_pstore.service... Aug 12 23:49:40.599878 systemd[1]: Starting modprobe@loop.service... 
Aug 12 23:49:40.601000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 12 23:49:40.601000 audit[1178]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe78f51d0 a2=420 a3=0 items=0 ppid=1148 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 12 23:49:40.601000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 12 23:49:40.600604 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.602402 augenrules[1178]: No rules Aug 12 23:49:40.600749 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.602237 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 12 23:49:40.603141 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:49:40.604563 systemd[1]: Finished audit-rules.service. Aug 12 23:49:40.605635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:49:40.605763 systemd[1]: Finished modprobe@dm_mod.service. Aug 12 23:49:40.606959 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:49:40.607081 systemd[1]: Finished modprobe@drm.service. Aug 12 23:49:40.608375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:49:40.608594 systemd[1]: Finished modprobe@efi_pstore.service. Aug 12 23:49:40.609744 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:49:40.609958 systemd[1]: Finished modprobe@loop.service. 
Aug 12 23:49:40.611500 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:49:40.611599 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.612812 systemd[1]: Finished ensure-sysext.service. Aug 12 23:49:40.617363 systemd-resolved[1152]: Positive Trust Anchors: Aug 12 23:49:40.617374 systemd-resolved[1152]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:49:40.617402 systemd-resolved[1152]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 12 23:49:40.623568 systemd[1]: Started systemd-timesyncd.service. Aug 12 23:49:40.624507 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 12 23:49:40.624626 systemd[1]: Reached target time-set.target. Aug 12 23:49:40.624865 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2025-08-12 23:49:40.300183 UTC. Aug 12 23:49:40.626495 systemd-resolved[1152]: Defaulting to hostname 'linux'. Aug 12 23:49:40.627938 systemd[1]: Started systemd-resolved.service. Aug 12 23:49:40.628578 systemd[1]: Reached target network.target. Aug 12 23:49:40.629139 systemd[1]: Reached target nss-lookup.target. Aug 12 23:49:40.629846 systemd[1]: Reached target sysinit.target. Aug 12 23:49:40.630481 systemd[1]: Started motdgen.path. Aug 12 23:49:40.631023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 12 23:49:40.632007 systemd[1]: Started logrotate.timer. 
Aug 12 23:49:40.632662 systemd[1]: Started mdadm.timer. Aug 12 23:49:40.633168 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 12 23:49:40.633846 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 12 23:49:40.633878 systemd[1]: Reached target paths.target. Aug 12 23:49:40.634415 systemd[1]: Reached target timers.target. Aug 12 23:49:40.635297 systemd[1]: Listening on dbus.socket. Aug 12 23:49:40.636997 systemd[1]: Starting docker.socket... Aug 12 23:49:40.643053 systemd[1]: Listening on sshd.socket. Aug 12 23:49:40.643788 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.644322 systemd[1]: Listening on docker.socket. Aug 12 23:49:40.645046 systemd[1]: Reached target sockets.target. Aug 12 23:49:40.645649 systemd[1]: Reached target basic.target. Aug 12 23:49:40.646236 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.646269 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 12 23:49:40.647319 systemd[1]: Starting containerd.service... Aug 12 23:49:40.649106 systemd[1]: Starting dbus.service... Aug 12 23:49:40.650835 systemd[1]: Starting enable-oem-cloudinit.service... Aug 12 23:49:40.652742 systemd[1]: Starting extend-filesystems.service... Aug 12 23:49:40.653463 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 12 23:49:40.656286 systemd[1]: Starting motdgen.service... Aug 12 23:49:40.658172 systemd[1]: Starting prepare-helm.service... Aug 12 23:49:40.660018 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 12 23:49:40.662077 systemd[1]: Starting sshd-keygen.service... 
Aug 12 23:49:40.665116 systemd[1]: Starting systemd-logind.service... Aug 12 23:49:40.667230 jq[1190]: false Aug 12 23:49:40.667438 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 12 23:49:40.667560 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:49:40.668061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:49:40.668927 systemd[1]: Starting update-engine.service... Aug 12 23:49:40.670567 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 12 23:49:40.677503 jq[1208]: true Aug 12 23:49:40.683588 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:49:40.683802 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 12 23:49:40.684174 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:49:40.684327 systemd[1]: Finished motdgen.service. Aug 12 23:49:40.685307 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:49:40.685478 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Aug 12 23:49:40.690389 extend-filesystems[1191]: Found loop1 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda1 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda2 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda3 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found usr Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda4 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda6 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda7 Aug 12 23:49:40.690389 extend-filesystems[1191]: Found vda9 Aug 12 23:49:40.690389 extend-filesystems[1191]: Checking size of /dev/vda9 Aug 12 23:49:40.726949 extend-filesystems[1191]: Resized partition /dev/vda9 Aug 12 23:49:40.727601 tar[1211]: linux-arm64/helm Aug 12 23:49:40.705035 systemd[1]: Started dbus.service. Aug 12 23:49:40.704851 dbus-daemon[1189]: [system] SELinux support is enabled Aug 12 23:49:40.728048 jq[1212]: true Aug 12 23:49:40.715785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:49:40.715828 systemd[1]: Reached target system-config.target. Aug 12 23:49:40.716547 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:49:40.716563 systemd[1]: Reached target user-config.target. 
Aug 12 23:49:40.730865 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021) Aug 12 23:49:40.749457 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 12 23:49:40.773453 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 12 23:49:40.796197 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:49:40.797826 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:49:40.797826 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 12 23:49:40.797826 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 12 23:49:40.797265 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 12 23:49:40.805575 extend-filesystems[1191]: Resized filesystem in /dev/vda9 Aug 12 23:49:40.797452 systemd[1]: Finished extend-filesystems.service. Aug 12 23:49:40.798995 systemd-logind[1203]: New seat seat0. Aug 12 23:49:40.810478 systemd[1]: Started systemd-logind.service. Aug 12 23:49:40.819158 update_engine[1207]: I0812 23:49:40.818795 1207 main.cc:92] Flatcar Update Engine starting Aug 12 23:49:40.827735 env[1213]: time="2025-08-12T23:49:40.826179760Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 12 23:49:40.841481 systemd[1]: Started update-engine.service. Aug 12 23:49:40.841682 update_engine[1207]: I0812 23:49:40.841518 1207 update_check_scheduler.cc:74] Next update check in 8m2s Aug 12 23:49:40.844069 systemd[1]: Started locksmithd.service. Aug 12 23:49:40.851176 env[1213]: time="2025-08-12T23:49:40.851117440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 12 23:49:40.851322 env[1213]: time="2025-08-12T23:49:40.851300240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854460640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854507800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854747680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854767240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854780360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854801240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.854884400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.855183200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.855329760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:49:40.855625 env[1213]: time="2025-08-12T23:49:40.855347880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 12 23:49:40.855952 bash[1240]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:49:40.856030 env[1213]: time="2025-08-12T23:49:40.855415880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 12 23:49:40.856030 env[1213]: time="2025-08-12T23:49:40.855445880Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:49:40.856223 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863272760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863322720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863336360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863381600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863398760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.863451 env[1213]: time="2025-08-12T23:49:40.863412440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 12 23:49:40.863687 env[1213]: time="2025-08-12T23:49:40.863510880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.863941 env[1213]: time="2025-08-12T23:49:40.863908440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.863977 env[1213]: time="2025-08-12T23:49:40.863943600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.863977 env[1213]: time="2025-08-12T23:49:40.863962760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.864025 env[1213]: time="2025-08-12T23:49:40.863975520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.864025 env[1213]: time="2025-08-12T23:49:40.863990160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 12 23:49:40.864161 env[1213]: time="2025-08-12T23:49:40.864142920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 12 23:49:40.864247 env[1213]: time="2025-08-12T23:49:40.864232240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 12 23:49:40.864570 env[1213]: time="2025-08-12T23:49:40.864548560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 12 23:49:40.864608 env[1213]: time="2025-08-12T23:49:40.864587800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864608 env[1213]: time="2025-08-12T23:49:40.864603720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 12 23:49:40.864739 env[1213]: time="2025-08-12T23:49:40.864725040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864739 env[1213]: time="2025-08-12T23:49:40.864742600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864739 env[1213]: time="2025-08-12T23:49:40.864755720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864874 env[1213]: time="2025-08-12T23:49:40.864766600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864874 env[1213]: time="2025-08-12T23:49:40.864857760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864927 env[1213]: time="2025-08-12T23:49:40.864882160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864927 env[1213]: time="2025-08-12T23:49:40.864894280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.864927 env[1213]: time="2025-08-12T23:49:40.864915480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.865009 env[1213]: time="2025-08-12T23:49:40.864927920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 12 23:49:40.865120 env[1213]: time="2025-08-12T23:49:40.865094640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.865165 env[1213]: time="2025-08-12T23:49:40.865121040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Aug 12 23:49:40.865165 env[1213]: time="2025-08-12T23:49:40.865133880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 12 23:49:40.865165 env[1213]: time="2025-08-12T23:49:40.865147160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:49:40.865165 env[1213]: time="2025-08-12T23:49:40.865172320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 12 23:49:40.865165 env[1213]: time="2025-08-12T23:49:40.865184480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:49:40.865303 env[1213]: time="2025-08-12T23:49:40.865201640Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 12 23:49:40.865303 env[1213]: time="2025-08-12T23:49:40.865236600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 12 23:49:40.865517 env[1213]: time="2025-08-12T23:49:40.865454040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.865533280Z" level=info msg="Connect containerd service" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.865575000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866290040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866582680Z" level=info msg="Start subscribing containerd event" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866626240Z" level=info msg="Start recovering state" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866688480Z" level=info msg="Start event monitor" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866707840Z" level=info msg="Start snapshots syncer" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866717800Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:49:40.868135 env[1213]: time="2025-08-12T23:49:40.866725080Z" level=info msg="Start streaming server" Aug 12 23:49:40.868961 env[1213]: time="2025-08-12T23:49:40.868928320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:49:40.869020 env[1213]: time="2025-08-12T23:49:40.868979960Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:49:40.869042 env[1213]: time="2025-08-12T23:49:40.869031720Z" level=info msg="containerd successfully booted in 0.043715s" Aug 12 23:49:40.869118 systemd[1]: Started containerd.service. 
Aug 12 23:49:40.901122 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:49:41.122101 tar[1211]: linux-arm64/LICENSE Aug 12 23:49:41.122199 tar[1211]: linux-arm64/README.md Aug 12 23:49:41.126297 systemd[1]: Finished prepare-helm.service. Aug 12 23:49:41.402610 systemd-networkd[1037]: eth0: Gained IPv6LL Aug 12 23:49:41.404213 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 12 23:49:41.405267 systemd[1]: Reached target network-online.target. Aug 12 23:49:41.407508 systemd[1]: Starting kubelet.service... Aug 12 23:49:41.943400 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:49:41.960992 systemd[1]: Finished sshd-keygen.service. Aug 12 23:49:41.963052 systemd[1]: Starting issuegen.service... Aug 12 23:49:41.967581 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:49:41.967731 systemd[1]: Finished issuegen.service. Aug 12 23:49:41.969742 systemd[1]: Starting systemd-user-sessions.service... Aug 12 23:49:41.976596 systemd[1]: Finished systemd-user-sessions.service. Aug 12 23:49:41.978867 systemd[1]: Started getty@tty1.service. Aug 12 23:49:41.980815 systemd[1]: Started serial-getty@ttyAMA0.service. Aug 12 23:49:41.981649 systemd[1]: Reached target getty.target. Aug 12 23:49:42.044681 systemd[1]: Started kubelet.service. Aug 12 23:49:42.045744 systemd[1]: Reached target multi-user.target. Aug 12 23:49:42.047583 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 12 23:49:42.054340 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 12 23:49:42.054506 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 12 23:49:42.055378 systemd[1]: Startup finished in 608ms (kernel) + 4.700s (initrd) + 4.782s (userspace) = 10.091s. 
Aug 12 23:49:42.528660 kubelet[1273]: E0812 23:49:42.528598 1273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:49:42.530408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:49:42.530546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:49:45.286992 systemd[1]: Created slice system-sshd.slice. Aug 12 23:49:45.288047 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:36058.service. Aug 12 23:49:45.347098 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 36058 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:49:45.349801 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.360382 systemd-logind[1203]: New session 1 of user core. Aug 12 23:49:45.361317 systemd[1]: Created slice user-500.slice. Aug 12 23:49:45.362439 systemd[1]: Starting user-runtime-dir@500.service... Aug 12 23:49:45.371503 systemd[1]: Finished user-runtime-dir@500.service. Aug 12 23:49:45.372940 systemd[1]: Starting user@500.service... Aug 12 23:49:45.377824 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.438253 systemd[1285]: Queued start job for default target default.target. Aug 12 23:49:45.438771 systemd[1285]: Reached target paths.target. Aug 12 23:49:45.438802 systemd[1285]: Reached target sockets.target. Aug 12 23:49:45.438813 systemd[1285]: Reached target timers.target. Aug 12 23:49:45.438822 systemd[1285]: Reached target basic.target. Aug 12 23:49:45.438862 systemd[1285]: Reached target default.target. Aug 12 23:49:45.438886 systemd[1285]: Startup finished in 54ms. 
Aug 12 23:49:45.438959 systemd[1]: Started user@500.service. Aug 12 23:49:45.440153 systemd[1]: Started session-1.scope. Aug 12 23:49:45.493824 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:36060.service. Aug 12 23:49:45.558446 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 36060 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:49:45.559896 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.565525 systemd[1]: Started session-2.scope. Aug 12 23:49:45.565698 systemd-logind[1203]: New session 2 of user core. Aug 12 23:49:45.620399 sshd[1294]: pam_unix(sshd:session): session closed for user core Aug 12 23:49:45.625217 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:36060.service: Deactivated successfully. Aug 12 23:49:45.625821 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:49:45.626304 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:49:45.627264 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:36070.service. Aug 12 23:49:45.628057 systemd-logind[1203]: Removed session 2. Aug 12 23:49:45.670400 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 36070 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:49:45.671756 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.675584 systemd-logind[1203]: New session 3 of user core. Aug 12 23:49:45.675987 systemd[1]: Started session-3.scope. Aug 12 23:49:45.725969 sshd[1300]: pam_unix(sshd:session): session closed for user core Aug 12 23:49:45.729771 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:36084.service. Aug 12 23:49:45.730289 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:36070.service: Deactivated successfully. Aug 12 23:49:45.731110 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:49:45.731598 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. 
Aug 12 23:49:45.732254 systemd-logind[1203]: Removed session 3. Aug 12 23:49:45.774104 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 36084 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:49:45.775470 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.781528 systemd-logind[1203]: New session 4 of user core. Aug 12 23:49:45.782216 systemd[1]: Started session-4.scope. Aug 12 23:49:45.837156 sshd[1305]: pam_unix(sshd:session): session closed for user core Aug 12 23:49:45.842315 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:36084.service: Deactivated successfully. Aug 12 23:49:45.843021 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:49:45.843801 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:49:45.845419 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:36090.service. Aug 12 23:49:45.846227 systemd-logind[1203]: Removed session 4. Aug 12 23:49:45.895243 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 36090 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 12 23:49:45.896893 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:49:45.900193 systemd-logind[1203]: New session 5 of user core. Aug 12 23:49:45.900994 systemd[1]: Started session-5.scope. Aug 12 23:49:45.969095 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:49:45.969537 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 12 23:49:46.036936 systemd[1]: Starting docker.service... 
Aug 12 23:49:46.130762 env[1328]: time="2025-08-12T23:49:46.130646455Z" level=info msg="Starting up" Aug 12 23:49:46.132548 env[1328]: time="2025-08-12T23:49:46.132513186Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 12 23:49:46.132548 env[1328]: time="2025-08-12T23:49:46.132540369Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 12 23:49:46.132808 env[1328]: time="2025-08-12T23:49:46.132768839Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 12 23:49:46.132808 env[1328]: time="2025-08-12T23:49:46.132806559Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 12 23:49:46.134833 env[1328]: time="2025-08-12T23:49:46.134809557Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 12 23:49:46.134833 env[1328]: time="2025-08-12T23:49:46.134829807Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 12 23:49:46.134938 env[1328]: time="2025-08-12T23:49:46.134845788Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 12 23:49:46.134938 env[1328]: time="2025-08-12T23:49:46.134854248Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 12 23:49:46.139053 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3913566389-merged.mount: Deactivated successfully. Aug 12 23:49:46.257018 env[1328]: time="2025-08-12T23:49:46.256975242Z" level=info msg="Loading containers: start." Aug 12 23:49:46.404457 kernel: Initializing XFRM netlink socket Aug 12 23:49:46.435373 env[1328]: time="2025-08-12T23:49:46.435319952Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Aug 12 23:49:46.506242 systemd-networkd[1037]: docker0: Link UP Aug 12 23:49:46.527111 env[1328]: time="2025-08-12T23:49:46.527059625Z" level=info msg="Loading containers: done." Aug 12 23:49:46.550144 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck219935484-merged.mount: Deactivated successfully. Aug 12 23:49:46.558212 env[1328]: time="2025-08-12T23:49:46.558139280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:49:46.558813 env[1328]: time="2025-08-12T23:49:46.558765702Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 12 23:49:46.559327 env[1328]: time="2025-08-12T23:49:46.559253742Z" level=info msg="Daemon has completed initialization" Aug 12 23:49:46.582327 systemd[1]: Started docker.service. Aug 12 23:49:46.588653 env[1328]: time="2025-08-12T23:49:46.588514294Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:49:47.422069 env[1213]: time="2025-08-12T23:49:47.422007469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:49:48.167362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040821512.mount: Deactivated successfully. 
Aug 12 23:49:49.292382 env[1213]: time="2025-08-12T23:49:49.292332884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:49.293722 env[1213]: time="2025-08-12T23:49:49.293695520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:49.296545 env[1213]: time="2025-08-12T23:49:49.296504332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:49.298166 env[1213]: time="2025-08-12T23:49:49.298134233Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:49.298968 env[1213]: time="2025-08-12T23:49:49.298901438Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 12 23:49:49.302379 env[1213]: time="2025-08-12T23:49:49.302352776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:49:50.638734 env[1213]: time="2025-08-12T23:49:50.638689242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:50.640088 env[1213]: time="2025-08-12T23:49:50.640048277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 12 23:49:50.642504 env[1213]: time="2025-08-12T23:49:50.642474232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:50.644448 env[1213]: time="2025-08-12T23:49:50.644397112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:50.644983 env[1213]: time="2025-08-12T23:49:50.644938790Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 12 23:49:50.645893 env[1213]: time="2025-08-12T23:49:50.645866229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:49:51.824899 env[1213]: time="2025-08-12T23:49:51.824852461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:51.826271 env[1213]: time="2025-08-12T23:49:51.826238049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:51.828038 env[1213]: time="2025-08-12T23:49:51.828007222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:51.830349 env[1213]: time="2025-08-12T23:49:51.830318197Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:51.831007 env[1213]: time="2025-08-12T23:49:51.830968706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 12 23:49:51.831490 env[1213]: time="2025-08-12T23:49:51.831465750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:49:52.555669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:49:52.555845 systemd[1]: Stopped kubelet.service. Aug 12 23:49:52.557239 systemd[1]: Starting kubelet.service... Aug 12 23:49:52.650580 systemd[1]: Started kubelet.service. Aug 12 23:49:52.688730 kubelet[1463]: E0812 23:49:52.688671 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:49:52.691258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:49:52.691374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:49:52.855741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346820529.mount: Deactivated successfully. 
Aug 12 23:49:53.627411 env[1213]: time="2025-08-12T23:49:53.627359094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:53.628713 env[1213]: time="2025-08-12T23:49:53.628684227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:53.630008 env[1213]: time="2025-08-12T23:49:53.629973615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:53.631348 env[1213]: time="2025-08-12T23:49:53.631322394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:53.631773 env[1213]: time="2025-08-12T23:49:53.631744877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 12 23:49:53.632374 env[1213]: time="2025-08-12T23:49:53.632347319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:49:54.217880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693970887.mount: Deactivated successfully. 
Aug 12 23:49:55.286747 env[1213]: time="2025-08-12T23:49:55.286675480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.289107 env[1213]: time="2025-08-12T23:49:55.288266156Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.290686 env[1213]: time="2025-08-12T23:49:55.290651116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.292527 env[1213]: time="2025-08-12T23:49:55.292487686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.293483 env[1213]: time="2025-08-12T23:49:55.293443156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:49:55.296810 env[1213]: time="2025-08-12T23:49:55.296770033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:49:55.849682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218318042.mount: Deactivated successfully. 
Aug 12 23:49:55.857671 env[1213]: time="2025-08-12T23:49:55.857618202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.859488 env[1213]: time="2025-08-12T23:49:55.859450995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.861034 env[1213]: time="2025-08-12T23:49:55.861002398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.862413 env[1213]: time="2025-08-12T23:49:55.862379616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:55.862881 env[1213]: time="2025-08-12T23:49:55.862854589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:49:55.863464 env[1213]: time="2025-08-12T23:49:55.863438556Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:49:56.537442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802427563.mount: Deactivated successfully. 
Aug 12 23:49:58.672200 env[1213]: time="2025-08-12T23:49:58.672130770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:58.714889 env[1213]: time="2025-08-12T23:49:58.714839438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:58.749206 env[1213]: time="2025-08-12T23:49:58.749146950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:58.772971 env[1213]: time="2025-08-12T23:49:58.772927881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 12 23:49:58.773902 env[1213]: time="2025-08-12T23:49:58.773870000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 12 23:50:02.805792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:50:02.806012 systemd[1]: Stopped kubelet.service. Aug 12 23:50:02.807736 systemd[1]: Starting kubelet.service... Aug 12 23:50:02.946889 systemd[1]: Started kubelet.service. 
Aug 12 23:50:03.006313 kubelet[1495]: E0812 23:50:03.006252 1495 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:50:03.008313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:50:03.008463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:50:03.775466 systemd[1]: Stopped kubelet.service. Aug 12 23:50:03.777803 systemd[1]: Starting kubelet.service... Aug 12 23:50:03.811470 systemd[1]: Reloading. Aug 12 23:50:03.878286 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-08-12T23:50:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 12 23:50:03.878322 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-08-12T23:50:03Z" level=info msg="torcx already run" Aug 12 23:50:04.089053 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 12 23:50:04.089245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 12 23:50:04.104879 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:50:04.178812 systemd[1]: Started kubelet.service. Aug 12 23:50:04.182618 systemd[1]: Stopping kubelet.service... 
Aug 12 23:50:04.183059 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:50:04.183229 systemd[1]: Stopped kubelet.service. Aug 12 23:50:04.184793 systemd[1]: Starting kubelet.service... Aug 12 23:50:04.275866 systemd[1]: Started kubelet.service. Aug 12 23:50:04.315586 kubelet[1579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:50:04.315916 kubelet[1579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:50:04.315969 kubelet[1579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:50:04.316107 kubelet[1579]: I0812 23:50:04.316076 1579 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:50:06.094725 kubelet[1579]: I0812 23:50:06.094680 1579 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:50:06.095077 kubelet[1579]: I0812 23:50:06.095063 1579 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:50:06.095442 kubelet[1579]: I0812 23:50:06.095403 1579 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:50:06.140856 kubelet[1579]: E0812 23:50:06.140806 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:50:06.141984 kubelet[1579]: I0812 23:50:06.141950 1579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:50:06.150607 kubelet[1579]: E0812 23:50:06.150572 1579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:50:06.150763 kubelet[1579]: I0812 23:50:06.150746 1579 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:50:06.154592 kubelet[1579]: I0812 23:50:06.154567 1579 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:50:06.155528 kubelet[1579]: I0812 23:50:06.155504 1579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:50:06.155813 kubelet[1579]: I0812 23:50:06.155788 1579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:50:06.156059 kubelet[1579]: I0812 23:50:06.155878 1579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Aug 12 23:50:06.156237 kubelet[1579]: I0812 23:50:06.156225 1579 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:50:06.156301 kubelet[1579]: I0812 23:50:06.156292 1579 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:50:06.156614 kubelet[1579]: I0812 23:50:06.156596 1579 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:50:06.161781 kubelet[1579]: I0812 23:50:06.161752 1579 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:50:06.161914 kubelet[1579]: I0812 23:50:06.161902 1579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:50:06.161994 kubelet[1579]: I0812 23:50:06.161983 1579 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:50:06.162128 kubelet[1579]: I0812 23:50:06.162117 1579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:50:06.163123 kubelet[1579]: W0812 23:50:06.163067 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Aug 12 23:50:06.163196 kubelet[1579]: E0812 23:50:06.163135 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:50:06.164445 kubelet[1579]: W0812 23:50:06.164394 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Aug 12 23:50:06.164516 kubelet[1579]: E0812 
23:50:06.164454 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:50:06.170023 kubelet[1579]: I0812 23:50:06.170002 1579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 12 23:50:06.171034 kubelet[1579]: I0812 23:50:06.171010 1579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:50:06.171234 kubelet[1579]: W0812 23:50:06.171222 1579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:50:06.172294 kubelet[1579]: I0812 23:50:06.172278 1579 server.go:1274] "Started kubelet" Aug 12 23:50:06.172583 kubelet[1579]: I0812 23:50:06.172551 1579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:50:06.175201 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 12 23:50:06.175345 kubelet[1579]: I0812 23:50:06.175318 1579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:50:06.175867 kubelet[1579]: I0812 23:50:06.175843 1579 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:50:06.177828 kubelet[1579]: I0812 23:50:06.177752 1579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:50:06.178069 kubelet[1579]: I0812 23:50:06.178044 1579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:50:06.179488 kubelet[1579]: I0812 23:50:06.179412 1579 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:50:06.180879 kubelet[1579]: E0812 23:50:06.180845 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:50:06.181467 kubelet[1579]: E0812 23:50:06.181399 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Aug 12 23:50:06.181538 kubelet[1579]: I0812 23:50:06.181466 1579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:50:06.181601 kubelet[1579]: W0812 23:50:06.181542 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Aug 12 23:50:06.181639 kubelet[1579]: E0812 23:50:06.181615 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:50:06.181700 kubelet[1579]: I0812 23:50:06.181681 1579 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:50:06.181930 kubelet[1579]: I0812 23:50:06.181902 1579 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:50:06.182639 kubelet[1579]: E0812 23:50:06.181500 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b29f3a292b149 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:50:06.172254537 +0000 UTC m=+1.893400752,LastTimestamp:2025-08-12 23:50:06.172254537 +0000 UTC m=+1.893400752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:50:06.184215 kubelet[1579]: E0812 23:50:06.184191 1579 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:50:06.185842 kubelet[1579]: I0812 23:50:06.185819 1579 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:50:06.185842 kubelet[1579]: I0812 23:50:06.185842 1579 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:50:06.185948 kubelet[1579]: I0812 23:50:06.185928 1579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:50:06.198270 kubelet[1579]: I0812 23:50:06.198246 1579 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:50:06.198270 kubelet[1579]: I0812 23:50:06.198265 1579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:50:06.198438 kubelet[1579]: I0812 23:50:06.198285 1579 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:50:06.199725 kubelet[1579]: I0812 23:50:06.199697 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:50:06.200817 kubelet[1579]: I0812 23:50:06.200796 1579 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:50:06.200911 kubelet[1579]: I0812 23:50:06.200900 1579 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:50:06.201001 kubelet[1579]: I0812 23:50:06.200989 1579 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:50:06.201107 kubelet[1579]: E0812 23:50:06.201088 1579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:50:06.205798 kubelet[1579]: W0812 23:50:06.205742 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Aug 12 23:50:06.205922 kubelet[1579]: E0812 23:50:06.205808 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:50:06.281741 kubelet[1579]: E0812 23:50:06.281702 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:50:06.302064 kubelet[1579]: E0812 23:50:06.302026 1579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:50:06.308767 kubelet[1579]: I0812 23:50:06.308726 1579 policy_none.go:49] "None policy: Start" Aug 12 23:50:06.309801 kubelet[1579]: I0812 23:50:06.309777 1579 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:50:06.309882 kubelet[1579]: I0812 23:50:06.309810 1579 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:50:06.319049 systemd[1]: Created slice kubepods.slice. 
Aug 12 23:50:06.323340 systemd[1]: Created slice kubepods-burstable.slice.
Aug 12 23:50:06.325763 systemd[1]: Created slice kubepods-besteffort.slice.
Aug 12 23:50:06.334168 kubelet[1579]: I0812 23:50:06.334122 1579 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:50:06.334374 kubelet[1579]: I0812 23:50:06.334298 1579 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:50:06.334374 kubelet[1579]: I0812 23:50:06.334314 1579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:50:06.334810 kubelet[1579]: I0812 23:50:06.334681 1579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:50:06.336300 kubelet[1579]: E0812 23:50:06.336278 1579 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 12 23:50:06.382463 kubelet[1579]: E0812 23:50:06.382332 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms"
Aug 12 23:50:06.436537 kubelet[1579]: I0812 23:50:06.436477 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:06.437024 kubelet[1579]: E0812 23:50:06.436975 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost"
Aug 12 23:50:06.509442 systemd[1]: Created slice kubepods-burstable-pod8587ded5325d6ff523f4e0a699619b42.slice.
Aug 12 23:50:06.522454 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice.
Aug 12 23:50:06.538920 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice.
Aug 12 23:50:06.638935 kubelet[1579]: I0812 23:50:06.638807 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:06.640157 kubelet[1579]: E0812 23:50:06.640114 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost"
Aug 12 23:50:06.683183 kubelet[1579]: I0812 23:50:06.683125 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:06.683266 kubelet[1579]: I0812 23:50:06.683190 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:06.683266 kubelet[1579]: I0812 23:50:06.683233 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Aug 12 23:50:06.683266 kubelet[1579]: I0812 23:50:06.683254 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:06.683352 kubelet[1579]: I0812 23:50:06.683270 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:06.683352 kubelet[1579]: I0812 23:50:06.683289 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:06.683352 kubelet[1579]: I0812 23:50:06.683306 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:50:06.683352 kubelet[1579]: I0812 23:50:06.683321 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:50:06.683352 kubelet[1579]: I0812 23:50:06.683338 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:50:06.784446 kubelet[1579]: E0812 23:50:06.784341 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms"
Aug 12 23:50:06.820843 kubelet[1579]: E0812 23:50:06.820767 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:06.821605 env[1213]: time="2025-08-12T23:50:06.821549943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8587ded5325d6ff523f4e0a699619b42,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:06.824376 kubelet[1579]: E0812 23:50:06.824341 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:06.825213 env[1213]: time="2025-08-12T23:50:06.825148518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:06.841569 kubelet[1579]: E0812 23:50:06.841530 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:06.842315 env[1213]: time="2025-08-12T23:50:06.842261942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:07.043270 kubelet[1579]: I0812 23:50:07.043075 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:07.044680 kubelet[1579]: E0812 23:50:07.044600 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost"
Aug 12 23:50:07.454489 kubelet[1579]: W0812 23:50:07.454393 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused
Aug 12 23:50:07.454489 kubelet[1579]: E0812 23:50:07.454487 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:50:07.492998 kubelet[1579]: W0812 23:50:07.492951 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused
Aug 12 23:50:07.493143 kubelet[1579]: E0812 23:50:07.493000 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:50:07.571094 kubelet[1579]: W0812 23:50:07.571012 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused
Aug 12 23:50:07.571094 kubelet[1579]: E0812 23:50:07.571086 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:50:07.585216 kubelet[1579]: E0812 23:50:07.585158 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s"
Aug 12 23:50:07.628864 kubelet[1579]: W0812 23:50:07.628787 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused
Aug 12 23:50:07.628864 kubelet[1579]: E0812 23:50:07.628838 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:50:07.846322 kubelet[1579]: I0812 23:50:07.846258 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:07.846681 kubelet[1579]: E0812 23:50:07.846643 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost"
Aug 12 23:50:07.991264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353937503.mount: Deactivated successfully.
Aug 12 23:50:08.089978 env[1213]: time="2025-08-12T23:50:08.089933215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.138077 env[1213]: time="2025-08-12T23:50:08.137951944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.161061 env[1213]: time="2025-08-12T23:50:08.161012955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.175042 env[1213]: time="2025-08-12T23:50:08.175000411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.201172 kubelet[1579]: E0812 23:50:08.201123 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError"
Aug 12 23:50:08.204833 env[1213]: time="2025-08-12T23:50:08.204787365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.222858 env[1213]: time="2025-08-12T23:50:08.222803418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.252899 env[1213]: time="2025-08-12T23:50:08.252061795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.300061 env[1213]: time="2025-08-12T23:50:08.300017833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.305524 env[1213]: time="2025-08-12T23:50:08.304143724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.307543 env[1213]: time="2025-08-12T23:50:08.307488835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.313028 env[1213]: time="2025-08-12T23:50:08.312980939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.315829 env[1213]: time="2025-08-12T23:50:08.315767826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:08.376646 env[1213]: time="2025-08-12T23:50:08.376301755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:08.376646 env[1213]: time="2025-08-12T23:50:08.376339353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:08.376646 env[1213]: time="2025-08-12T23:50:08.376349023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:08.376842 env[1213]: time="2025-08-12T23:50:08.376617766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af0fe3920f9ee83a1bac23b37a7677c4ca0e03adb4cd9541c32c9008baee8be9 pid=1622 runtime=io.containerd.runc.v2
Aug 12 23:50:08.377995 env[1213]: time="2025-08-12T23:50:08.377913258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:08.377995 env[1213]: time="2025-08-12T23:50:08.377956410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:08.377995 env[1213]: time="2025-08-12T23:50:08.377967518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:08.378698 env[1213]: time="2025-08-12T23:50:08.378379384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fd2af75e8d0b14b5c975165b6339e78587977abdef44567440131d4ecb99cd2 pid=1630 runtime=io.containerd.runc.v2
Aug 12 23:50:08.385395 env[1213]: time="2025-08-12T23:50:08.384587538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:08.385395 env[1213]: time="2025-08-12T23:50:08.384644395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:08.385395 env[1213]: time="2025-08-12T23:50:08.384655144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:08.385395 env[1213]: time="2025-08-12T23:50:08.384962724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8260ebba977d5f823e98ce57df354807efe2d015bd6fb5362d36098d8e4c733 pid=1657 runtime=io.containerd.runc.v2
Aug 12 23:50:08.392218 systemd[1]: Started cri-containerd-1fd2af75e8d0b14b5c975165b6339e78587977abdef44567440131d4ecb99cd2.scope.
Aug 12 23:50:08.395295 systemd[1]: Started cri-containerd-af0fe3920f9ee83a1bac23b37a7677c4ca0e03adb4cd9541c32c9008baee8be9.scope.
Aug 12 23:50:08.411083 systemd[1]: Started cri-containerd-d8260ebba977d5f823e98ce57df354807efe2d015bd6fb5362d36098d8e4c733.scope.
Aug 12 23:50:08.469764 env[1213]: time="2025-08-12T23:50:08.469717825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd2af75e8d0b14b5c975165b6339e78587977abdef44567440131d4ecb99cd2\""
Aug 12 23:50:08.471258 kubelet[1579]: E0812 23:50:08.471027 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:08.474018 env[1213]: time="2025-08-12T23:50:08.473981963Z" level=info msg="CreateContainer within sandbox \"1fd2af75e8d0b14b5c975165b6339e78587977abdef44567440131d4ecb99cd2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 12 23:50:08.477533 env[1213]: time="2025-08-12T23:50:08.477480585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8587ded5325d6ff523f4e0a699619b42,Namespace:kube-system,Attempt:0,} returns sandbox id \"af0fe3920f9ee83a1bac23b37a7677c4ca0e03adb4cd9541c32c9008baee8be9\""
Aug 12 23:50:08.478442 kubelet[1579]: E0812 23:50:08.478245 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:08.480494 env[1213]: time="2025-08-12T23:50:08.480456783Z" level=info msg="CreateContainer within sandbox \"af0fe3920f9ee83a1bac23b37a7677c4ca0e03adb4cd9541c32c9008baee8be9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 12 23:50:08.484719 env[1213]: time="2025-08-12T23:50:08.484677049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8260ebba977d5f823e98ce57df354807efe2d015bd6fb5362d36098d8e4c733\""
Aug 12 23:50:08.485987 kubelet[1579]: E0812 23:50:08.485803 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:08.487883 env[1213]: time="2025-08-12T23:50:08.487839322Z" level=info msg="CreateContainer within sandbox \"d8260ebba977d5f823e98ce57df354807efe2d015bd6fb5362d36098d8e4c733\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 12 23:50:08.508628 env[1213]: time="2025-08-12T23:50:08.508566266Z" level=info msg="CreateContainer within sandbox \"af0fe3920f9ee83a1bac23b37a7677c4ca0e03adb4cd9541c32c9008baee8be9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bb7d0025d3813a3da034c97857ac981de00562e472d0f888ed9151e72f56f9a\""
Aug 12 23:50:08.509324 env[1213]: time="2025-08-12T23:50:08.509289948Z" level=info msg="StartContainer for \"7bb7d0025d3813a3da034c97857ac981de00562e472d0f888ed9151e72f56f9a\""
Aug 12 23:50:08.514909 env[1213]: time="2025-08-12T23:50:08.514860925Z" level=info msg="CreateContainer within sandbox \"1fd2af75e8d0b14b5c975165b6339e78587977abdef44567440131d4ecb99cd2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e15d0750dc2a639c7d9a318da3043ce428e5babbbe1add6399ce386fc98f209b\""
Aug 12 23:50:08.515619 env[1213]: time="2025-08-12T23:50:08.515591160Z" level=info msg="StartContainer for \"e15d0750dc2a639c7d9a318da3043ce428e5babbbe1add6399ce386fc98f209b\""
Aug 12 23:50:08.517316 env[1213]: time="2025-08-12T23:50:08.517266992Z" level=info msg="CreateContainer within sandbox \"d8260ebba977d5f823e98ce57df354807efe2d015bd6fb5362d36098d8e4c733\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"add6139790d042257ae2e45333ccadf8c1d30b18eb4ea28cde570eff12fea6b9\""
Aug 12 23:50:08.517864 env[1213]: time="2025-08-12T23:50:08.517826975Z" level=info msg="StartContainer for \"add6139790d042257ae2e45333ccadf8c1d30b18eb4ea28cde570eff12fea6b9\""
Aug 12 23:50:08.532856 systemd[1]: Started cri-containerd-e15d0750dc2a639c7d9a318da3043ce428e5babbbe1add6399ce386fc98f209b.scope.
Aug 12 23:50:08.538962 systemd[1]: Started cri-containerd-7bb7d0025d3813a3da034c97857ac981de00562e472d0f888ed9151e72f56f9a.scope.
Aug 12 23:50:08.548515 systemd[1]: Started cri-containerd-add6139790d042257ae2e45333ccadf8c1d30b18eb4ea28cde570eff12fea6b9.scope.
Aug 12 23:50:08.616832 env[1213]: time="2025-08-12T23:50:08.616775024Z" level=info msg="StartContainer for \"7bb7d0025d3813a3da034c97857ac981de00562e472d0f888ed9151e72f56f9a\" returns successfully"
Aug 12 23:50:08.643827 env[1213]: time="2025-08-12T23:50:08.643773812Z" level=info msg="StartContainer for \"add6139790d042257ae2e45333ccadf8c1d30b18eb4ea28cde570eff12fea6b9\" returns successfully"
Aug 12 23:50:08.651797 env[1213]: time="2025-08-12T23:50:08.648132486Z" level=info msg="StartContainer for \"e15d0750dc2a639c7d9a318da3043ce428e5babbbe1add6399ce386fc98f209b\" returns successfully"
Aug 12 23:50:09.221254 kubelet[1579]: E0812 23:50:09.221183 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:09.222825 kubelet[1579]: E0812 23:50:09.222715 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:09.224192 kubelet[1579]: E0812 23:50:09.224123 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:09.447932 kubelet[1579]: I0812 23:50:09.447895 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:10.225855 kubelet[1579]: E0812 23:50:10.225828 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:10.425129 kubelet[1579]: E0812 23:50:10.425089 1579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 12 23:50:10.548495 kubelet[1579]: I0812 23:50:10.548344 1579 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Aug 12 23:50:10.548495 kubelet[1579]: E0812 23:50:10.548400 1579 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Aug 12 23:50:10.583129 kubelet[1579]: E0812 23:50:10.582839 1579 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185b29f3a292b149 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:50:06.172254537 +0000 UTC m=+1.893400752,LastTimestamp:2025-08-12 23:50:06.172254537 +0000 UTC m=+1.893400752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 12 23:50:11.166352 kubelet[1579]: I0812 23:50:11.166312 1579 apiserver.go:52] "Watching apiserver"
Aug 12 23:50:11.182701 kubelet[1579]: I0812 23:50:11.182650 1579 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 12 23:50:13.244945 systemd[1]: Reloading.
Aug 12 23:50:13.296728 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-08-12T23:50:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 12 23:50:13.296757 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-08-12T23:50:13Z" level=info msg="torcx already run"
Aug 12 23:50:13.361293 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 12 23:50:13.361312 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 12 23:50:13.377310 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:50:13.462310 systemd[1]: Stopping kubelet.service...
Aug 12 23:50:13.483890 systemd[1]: kubelet.service: Deactivated successfully.
Aug 12 23:50:13.484088 systemd[1]: Stopped kubelet.service.
Aug 12 23:50:13.484162 systemd[1]: kubelet.service: Consumed 2.377s CPU time.
Aug 12 23:50:13.486263 systemd[1]: Starting kubelet.service...
Aug 12 23:50:13.586162 systemd[1]: Started kubelet.service.
Aug 12 23:50:13.627503 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:50:13.627503 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 12 23:50:13.627503 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 12 23:50:13.627870 kubelet[1918]: I0812 23:50:13.627547 1918 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 12 23:50:13.633386 kubelet[1918]: I0812 23:50:13.633356 1918 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 12 23:50:13.633520 kubelet[1918]: I0812 23:50:13.633510 1918 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 12 23:50:13.633815 kubelet[1918]: I0812 23:50:13.633800 1918 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 12 23:50:13.636919 kubelet[1918]: I0812 23:50:13.636887 1918 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 12 23:50:13.638754 kubelet[1918]: I0812 23:50:13.638728 1918 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 12 23:50:13.641760 kubelet[1918]: E0812 23:50:13.641732 1918 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 12 23:50:13.641760 kubelet[1918]: I0812 23:50:13.641761 1918 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 12 23:50:13.644026 kubelet[1918]: I0812 23:50:13.644002 1918 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 12 23:50:13.644285 kubelet[1918]: I0812 23:50:13.644271 1918 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 12 23:50:13.644401 kubelet[1918]: I0812 23:50:13.644377 1918 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 12 23:50:13.644597 kubelet[1918]: I0812 23:50:13.644404 1918 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644607 1918 topology_manager.go:138] "Creating topology manager with none policy"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644616 1918 container_manager_linux.go:300] "Creating device plugin manager"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644654 1918 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644762 1918 kubelet.go:408] "Attempting to sync node with API server"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644780 1918 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644804 1918 kubelet.go:314] "Adding apiserver pod source"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.644817 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 12 23:50:13.646999 kubelet[1918]: I0812 23:50:13.645386 1918 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 12 23:50:13.654716 kubelet[1918]: I0812 23:50:13.654666 1918 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 12 23:50:13.655211 kubelet[1918]: I0812 23:50:13.655179 1918 server.go:1274] "Started kubelet"
Aug 12 23:50:13.659961 kubelet[1918]: I0812 23:50:13.659084 1918 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 12 23:50:13.659961 kubelet[1918]: I0812 23:50:13.659535 1918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 12 23:50:13.659961 kubelet[1918]: I0812 23:50:13.659822 1918 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 12 23:50:13.663725 kubelet[1918]: I0812 23:50:13.663702 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 12 23:50:13.664051 kubelet[1918]: I0812 23:50:13.664027 1918 server.go:449] "Adding debug handlers to kubelet server"
Aug 12 23:50:13.666339 kubelet[1918]: I0812 23:50:13.665831 1918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 12 23:50:13.667535 kubelet[1918]: I0812 23:50:13.667520 1918 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 12 23:50:13.668217 kubelet[1918]: I0812 23:50:13.668198 1918 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 12 23:50:13.668407 kubelet[1918]: I0812 23:50:13.668396 1918 reconciler.go:26] "Reconciler: start to sync state"
Aug 12 23:50:13.670177 kubelet[1918]: I0812 23:50:13.670064 1918 factory.go:221] Registration of the systemd container factory successfully
Aug 12 23:50:13.670261 kubelet[1918]: I0812 23:50:13.670193 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 12 23:50:13.671223 kubelet[1918]: E0812 23:50:13.671200 1918 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 12 23:50:13.672147 kubelet[1918]: I0812 23:50:13.672119 1918 factory.go:221] Registration of the containerd container factory successfully
Aug 12 23:50:13.681191 kubelet[1918]: I0812 23:50:13.680826 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 12 23:50:13.682711 kubelet[1918]: I0812 23:50:13.682685 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 12 23:50:13.682711 kubelet[1918]: I0812 23:50:13.682710 1918 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 12 23:50:13.682830 kubelet[1918]: I0812 23:50:13.682727 1918 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 12 23:50:13.682830 kubelet[1918]: E0812 23:50:13.682784 1918 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 12 23:50:13.709049 kubelet[1918]: I0812 23:50:13.709025 1918 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 12 23:50:13.709220 kubelet[1918]: I0812 23:50:13.709206 1918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 12 23:50:13.709283 kubelet[1918]: I0812 23:50:13.709275 1918 state_mem.go:36] "Initialized new in-memory state store"
Aug 12 23:50:13.709501 kubelet[1918]: I0812 23:50:13.709489 1918 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 12 23:50:13.709600 kubelet[1918]: I0812 23:50:13.709574 1918 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 12 23:50:13.709651 kubelet[1918]: I0812 23:50:13.709643 1918 policy_none.go:49] "None policy: Start"
Aug 12 23:50:13.710406 kubelet[1918]: I0812 23:50:13.710393 1918 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 12 23:50:13.710529 kubelet[1918]: I0812 23:50:13.710519 1918 state_mem.go:35] "Initializing new in-memory state store"
Aug 12 23:50:13.710874 kubelet[1918]: I0812 23:50:13.710849 1918 state_mem.go:75] "Updated machine memory state"
Aug 12 23:50:13.716980 kubelet[1918]: I0812 23:50:13.716948 1918 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 12 23:50:13.717154 kubelet[1918]: I0812 23:50:13.717137 1918 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 12 23:50:13.718070 kubelet[1918]: I0812 23:50:13.717152 1918 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 12 23:50:13.718310 kubelet[1918]: I0812 23:50:13.718287 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 12 23:50:13.828095 kubelet[1918]: I0812 23:50:13.828068 1918 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Aug 12 23:50:13.838567 kubelet[1918]: I0812 23:50:13.838469 1918 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Aug 12 23:50:13.838780 kubelet[1918]: I0812 23:50:13.838755 1918 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Aug 12 23:50:13.870015 kubelet[1918]: I0812 23:50:13.869969 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost"
Aug 12 23:50:13.870015 kubelet[1918]: I0812 23:50:13.870013 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:13.870191 kubelet[1918]: I0812 23:50:13.870032 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Aug 12 23:50:13.870191 kubelet[1918]: I0812 23:50:13.870049 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName:
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:50:13.870191 kubelet[1918]: I0812 23:50:13.870070 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:50:13.870191 kubelet[1918]: I0812 23:50:13.870089 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8587ded5325d6ff523f4e0a699619b42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8587ded5325d6ff523f4e0a699619b42\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:50:13.870191 kubelet[1918]: I0812 23:50:13.870105 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:50:13.870303 kubelet[1918]: I0812 23:50:13.870120 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:50:13.870303 kubelet[1918]: I0812 23:50:13.870136 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:50:14.096607 kubelet[1918]: E0812 23:50:14.096507 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.104041 kubelet[1918]: E0812 23:50:14.104001 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.104215 kubelet[1918]: E0812 23:50:14.104195 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.246575 sudo[1954]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 12 23:50:14.246794 sudo[1954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 12 23:50:14.645925 kubelet[1918]: I0812 23:50:14.645888 1918 apiserver.go:52] "Watching apiserver" Aug 12 23:50:14.668723 kubelet[1918]: I0812 23:50:14.668694 1918 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:50:14.696254 kubelet[1918]: E0812 23:50:14.696229 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.696711 kubelet[1918]: E0812 23:50:14.696696 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.696958 kubelet[1918]: E0812 23:50:14.696940 1918 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:14.704635 sudo[1954]: pam_unix(sudo:session): session closed for user root Aug 12 23:50:14.815287 kubelet[1918]: I0812 23:50:14.815220 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8152019099999999 podStartE2EDuration="1.81520191s" podCreationTimestamp="2025-08-12 23:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:14.779905752 +0000 UTC m=+1.189199202" watchObservedRunningTime="2025-08-12 23:50:14.81520191 +0000 UTC m=+1.224495360" Aug 12 23:50:14.816932 kubelet[1918]: I0812 23:50:14.816885 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8167903619999999 podStartE2EDuration="1.816790362s" podCreationTimestamp="2025-08-12 23:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:14.815863432 +0000 UTC m=+1.225156882" watchObservedRunningTime="2025-08-12 23:50:14.816790362 +0000 UTC m=+1.226083812" Aug 12 23:50:14.865926 kubelet[1918]: I0812 23:50:14.865862 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.865843725 podStartE2EDuration="1.865843725s" podCreationTimestamp="2025-08-12 23:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:14.843100191 +0000 UTC m=+1.252393601" watchObservedRunningTime="2025-08-12 23:50:14.865843725 +0000 UTC m=+1.275137175" Aug 12 23:50:15.697705 kubelet[1918]: E0812 23:50:15.697674 1918 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:16.698825 kubelet[1918]: E0812 23:50:16.698791 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:50:17.745549 sudo[1316]: pam_unix(sudo:session): session closed for user root Aug 12 23:50:17.747250 sshd[1312]: pam_unix(sshd:session): session closed for user core Aug 12 23:50:17.750855 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:36090.service: Deactivated successfully. Aug 12 23:50:17.751757 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:50:17.751942 systemd[1]: session-5.scope: Consumed 8.212s CPU time. Aug 12 23:50:17.752304 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:50:17.753947 systemd-logind[1203]: Removed session 5. 
Aug 12 23:50:19.324690 kubelet[1918]: E0812 23:50:19.324638 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:19.483308 kubelet[1918]: E0812 23:50:19.483274 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:19.703744 kubelet[1918]: E0812 23:50:19.703639 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:19.703859 kubelet[1918]: E0812 23:50:19.703771 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:20.510223 kubelet[1918]: I0812 23:50:20.510185 1918 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 12 23:50:20.510794 env[1213]: time="2025-08-12T23:50:20.510756141Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 12 23:50:20.511256 kubelet[1918]: I0812 23:50:20.511234 1918 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 12 23:50:21.405326 systemd[1]: Created slice kubepods-besteffort-pod12e90324_891c_46a6_be72_fe84f6aea63c.slice.
Aug 12 23:50:21.414659 systemd[1]: Created slice kubepods-burstable-podf32eb236_8db0_4193_ac1f_f3237824458e.slice.
Aug 12 23:50:21.423794 kubelet[1918]: I0812 23:50:21.423745 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-hubble-tls\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.423794 kubelet[1918]: I0812 23:50:21.423795 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12e90324-891c-46a6-be72-fe84f6aea63c-xtables-lock\") pod \"kube-proxy-s9b59\" (UID: \"12e90324-891c-46a6-be72-fe84f6aea63c\") " pod="kube-system/kube-proxy-s9b59"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423816 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-bpf-maps\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423831 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-etc-cni-netd\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423846 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-lib-modules\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423862 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-kernel\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423877 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12e90324-891c-46a6-be72-fe84f6aea63c-kube-proxy\") pod \"kube-proxy-s9b59\" (UID: \"12e90324-891c-46a6-be72-fe84f6aea63c\") " pod="kube-system/kube-proxy-s9b59"
Aug 12 23:50:21.423990 kubelet[1918]: I0812 23:50:21.423891 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-hostproc\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423907 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-cgroup\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423932 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-net\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423949 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-run\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423964 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cni-path\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423977 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-xtables-lock\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424136 kubelet[1918]: I0812 23:50:21.423991 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f32eb236-8db0-4193-ac1f-f3237824458e-clustermesh-secrets\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424275 kubelet[1918]: I0812 23:50:21.424017 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12e90324-891c-46a6-be72-fe84f6aea63c-lib-modules\") pod \"kube-proxy-s9b59\" (UID: \"12e90324-891c-46a6-be72-fe84f6aea63c\") " pod="kube-system/kube-proxy-s9b59"
Aug 12 23:50:21.424275 kubelet[1918]: I0812 23:50:21.424031 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-config-path\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.424275 kubelet[1918]: I0812 23:50:21.424047 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j54j\" (UniqueName: \"kubernetes.io/projected/12e90324-891c-46a6-be72-fe84f6aea63c-kube-api-access-6j54j\") pod \"kube-proxy-s9b59\" (UID: \"12e90324-891c-46a6-be72-fe84f6aea63c\") " pod="kube-system/kube-proxy-s9b59"
Aug 12 23:50:21.424275 kubelet[1918]: I0812 23:50:21.424068 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfpw\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-kube-api-access-8qfpw\") pod \"cilium-v4sd6\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") " pod="kube-system/cilium-v4sd6"
Aug 12 23:50:21.527794 kubelet[1918]: I0812 23:50:21.527754 1918 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 12 23:50:21.578220 systemd[1]: Created slice kubepods-besteffort-pod9a6f12de_7b5d_4d4f_99d6_9b0a948a3a03.slice.
Aug 12 23:50:21.626295 kubelet[1918]: I0812 23:50:21.626250 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns74h\" (UniqueName: \"kubernetes.io/projected/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-kube-api-access-ns74h\") pod \"cilium-operator-5d85765b45-t4h25\" (UID: \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\") " pod="kube-system/cilium-operator-5d85765b45-t4h25"
Aug 12 23:50:21.626448 kubelet[1918]: I0812 23:50:21.626338 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-cilium-config-path\") pod \"cilium-operator-5d85765b45-t4h25\" (UID: \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\") " pod="kube-system/cilium-operator-5d85765b45-t4h25"
Aug 12 23:50:21.713064 kubelet[1918]: E0812 23:50:21.712946 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:21.713596 env[1213]: time="2025-08-12T23:50:21.713540392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9b59,Uid:12e90324-891c-46a6-be72-fe84f6aea63c,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:21.718024 kubelet[1918]: E0812 23:50:21.717996 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:21.718719 env[1213]: time="2025-08-12T23:50:21.718497210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v4sd6,Uid:f32eb236-8db0-4193-ac1f-f3237824458e,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:21.767756 env[1213]: time="2025-08-12T23:50:21.767685103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:21.767756 env[1213]: time="2025-08-12T23:50:21.767723068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:21.767756 env[1213]: time="2025-08-12T23:50:21.767733189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:21.768171 env[1213]: time="2025-08-12T23:50:21.768131919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5a8121cc1c5a4e7965e274e75c7f43ba698f177d75e813828c844ef1c1410eb pid=2018 runtime=io.containerd.runc.v2
Aug 12 23:50:21.768298 env[1213]: time="2025-08-12T23:50:21.768208048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:21.768298 env[1213]: time="2025-08-12T23:50:21.768290619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:21.768361 env[1213]: time="2025-08-12T23:50:21.768303140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:21.768624 env[1213]: time="2025-08-12T23:50:21.768582615Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f pid=2022 runtime=io.containerd.runc.v2
Aug 12 23:50:21.783259 systemd[1]: Started cri-containerd-f5a8121cc1c5a4e7965e274e75c7f43ba698f177d75e813828c844ef1c1410eb.scope.
Aug 12 23:50:21.786753 systemd[1]: Started cri-containerd-b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f.scope.
Aug 12 23:50:21.836500 env[1213]: time="2025-08-12T23:50:21.836447797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9b59,Uid:12e90324-891c-46a6-be72-fe84f6aea63c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5a8121cc1c5a4e7965e274e75c7f43ba698f177d75e813828c844ef1c1410eb\""
Aug 12 23:50:21.837450 kubelet[1918]: E0812 23:50:21.837407 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:21.843988 env[1213]: time="2025-08-12T23:50:21.843398103Z" level=info msg="CreateContainer within sandbox \"f5a8121cc1c5a4e7965e274e75c7f43ba698f177d75e813828c844ef1c1410eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 12 23:50:21.856230 env[1213]: time="2025-08-12T23:50:21.856184177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v4sd6,Uid:f32eb236-8db0-4193-ac1f-f3237824458e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\""
Aug 12 23:50:21.857437 kubelet[1918]: E0812 23:50:21.857394 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:21.858913 env[1213]: time="2025-08-12T23:50:21.858862111Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 12 23:50:21.862604 env[1213]: time="2025-08-12T23:50:21.862563733Z" level=info msg="CreateContainer within sandbox \"f5a8121cc1c5a4e7965e274e75c7f43ba698f177d75e813828c844ef1c1410eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"40b20b3c2f015279bc58b75424ac6174d22faa8daa5e51cab0d1bf3317937ea9\""
Aug 12 23:50:21.864112 env[1213]: time="2025-08-12T23:50:21.863602182Z" level=info msg="StartContainer for \"40b20b3c2f015279bc58b75424ac6174d22faa8daa5e51cab0d1bf3317937ea9\""
Aug 12 23:50:21.883079 systemd[1]: Started cri-containerd-40b20b3c2f015279bc58b75424ac6174d22faa8daa5e51cab0d1bf3317937ea9.scope.
Aug 12 23:50:21.883441 kubelet[1918]: E0812 23:50:21.883405 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:21.885505 env[1213]: time="2025-08-12T23:50:21.884054812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t4h25,Uid:9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:21.908772 env[1213]: time="2025-08-12T23:50:21.908683723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:21.908896 env[1213]: time="2025-08-12T23:50:21.908789136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:21.908896 env[1213]: time="2025-08-12T23:50:21.908820660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:21.911094 env[1213]: time="2025-08-12T23:50:21.909419175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d pid=2119 runtime=io.containerd.runc.v2
Aug 12 23:50:21.929795 systemd[1]: Started cri-containerd-8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d.scope.
Aug 12 23:50:21.957529 env[1213]: time="2025-08-12T23:50:21.955128954Z" level=info msg="StartContainer for \"40b20b3c2f015279bc58b75424ac6174d22faa8daa5e51cab0d1bf3317937ea9\" returns successfully"
Aug 12 23:50:21.991248 env[1213]: time="2025-08-12T23:50:21.991098959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-t4h25,Uid:9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\""
Aug 12 23:50:21.993360 kubelet[1918]: E0812 23:50:21.991793 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:22.711386 kubelet[1918]: E0812 23:50:22.711335 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:22.720815 kubelet[1918]: I0812 23:50:22.720760 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s9b59" podStartSLOduration=1.720743972 podStartE2EDuration="1.720743972s" podCreationTimestamp="2025-08-12 23:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:22.720363527 +0000 UTC m=+9.129656977" watchObservedRunningTime="2025-08-12 23:50:22.720743972 +0000 UTC m=+9.130037422"
Aug 12 23:50:25.397260 kubelet[1918]: E0812 23:50:25.397211 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:25.723007 kubelet[1918]: E0812 23:50:25.722852 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:25.967524 update_engine[1207]: I0812 23:50:25.967474 1207 update_attempter.cc:509] Updating boot flags...
Aug 12 23:50:26.235334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188457792.mount: Deactivated successfully.
Aug 12 23:50:28.545199 env[1213]: time="2025-08-12T23:50:28.545150384Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:28.546502 env[1213]: time="2025-08-12T23:50:28.546471979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:28.548111 env[1213]: time="2025-08-12T23:50:28.548085360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:28.548635 env[1213]: time="2025-08-12T23:50:28.548602485Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Aug 12 23:50:28.551033 env[1213]: time="2025-08-12T23:50:28.550972332Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 12 23:50:28.553231 env[1213]: time="2025-08-12T23:50:28.553199606Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:50:28.568493 env[1213]: time="2025-08-12T23:50:28.568432094Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\""
Aug 12 23:50:28.570774 env[1213]: time="2025-08-12T23:50:28.570738415Z" level=info msg="StartContainer for \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\""
Aug 12 23:50:28.595616 systemd[1]: Started cri-containerd-8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc.scope.
Aug 12 23:50:28.690317 systemd[1]: cri-containerd-8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc.scope: Deactivated successfully.
Aug 12 23:50:28.723885 env[1213]: time="2025-08-12T23:50:28.723594226Z" level=info msg="StartContainer for \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\" returns successfully"
Aug 12 23:50:28.748811 env[1213]: time="2025-08-12T23:50:28.748758101Z" level=info msg="shim disconnected" id=8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc
Aug 12 23:50:28.748811 env[1213]: time="2025-08-12T23:50:28.748797384Z" level=warning msg="cleaning up after shim disconnected" id=8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc namespace=k8s.io
Aug 12 23:50:28.748811 env[1213]: time="2025-08-12T23:50:28.748806305Z" level=info msg="cleaning up dead shim"
Aug 12 23:50:28.749042 kubelet[1918]: E0812 23:50:28.748919 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:28.759814 env[1213]: time="2025-08-12T23:50:28.759747820Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:50:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2361 runtime=io.containerd.runc.v2\n"
Aug 12 23:50:29.565440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc-rootfs.mount: Deactivated successfully.
Aug 12 23:50:29.654174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314430824.mount: Deactivated successfully.
Aug 12 23:50:29.752586 kubelet[1918]: E0812 23:50:29.752549 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:29.758638 env[1213]: time="2025-08-12T23:50:29.758589783Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:50:29.776661 env[1213]: time="2025-08-12T23:50:29.776600520Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\""
Aug 12 23:50:29.778939 env[1213]: time="2025-08-12T23:50:29.778909872Z" level=info msg="StartContainer for \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\""
Aug 12 23:50:29.797785 systemd[1]: Started cri-containerd-073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e.scope.
Aug 12 23:50:29.837691 env[1213]: time="2025-08-12T23:50:29.837580468Z" level=info msg="StartContainer for \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\" returns successfully"
Aug 12 23:50:29.860340 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:50:29.860614 systemd[1]: Stopped systemd-sysctl.service.
Aug 12 23:50:29.860806 systemd[1]: Stopping systemd-sysctl.service...
Aug 12 23:50:29.862537 systemd[1]: Starting systemd-sysctl.service...
Aug 12 23:50:29.864912 systemd[1]: cri-containerd-073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e.scope: Deactivated successfully.
Aug 12 23:50:29.871854 systemd[1]: Finished systemd-sysctl.service.
Aug 12 23:50:29.907822 env[1213]: time="2025-08-12T23:50:29.907774422Z" level=info msg="shim disconnected" id=073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e
Aug 12 23:50:29.907822 env[1213]: time="2025-08-12T23:50:29.907821666Z" level=warning msg="cleaning up after shim disconnected" id=073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e namespace=k8s.io
Aug 12 23:50:29.908057 env[1213]: time="2025-08-12T23:50:29.907832027Z" level=info msg="cleaning up dead shim"
Aug 12 23:50:29.914492 env[1213]: time="2025-08-12T23:50:29.914444656Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2425 runtime=io.containerd.runc.v2\n"
Aug 12 23:50:30.505097 env[1213]: time="2025-08-12T23:50:30.505047123Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:30.507167 env[1213]: time="2025-08-12T23:50:30.507131168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:30.508473 env[1213]: time="2025-08-12T23:50:30.508447832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 12 23:50:30.508889 env[1213]: time="2025-08-12T23:50:30.508853225Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Aug 12 23:50:30.511589 env[1213]: time="2025-08-12T23:50:30.511556919Z" level=info msg="CreateContainer within sandbox \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 12 23:50:30.522358 env[1213]: time="2025-08-12T23:50:30.522296850Z" level=info msg="CreateContainer within sandbox \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\""
Aug 12 23:50:30.522907 env[1213]: time="2025-08-12T23:50:30.522818532Z" level=info msg="StartContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\""
Aug 12 23:50:30.537301 systemd[1]: Started cri-containerd-fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950.scope.
Aug 12 23:50:30.584463 env[1213]: time="2025-08-12T23:50:30.584384772Z" level=info msg="StartContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" returns successfully"
Aug 12 23:50:30.756257 kubelet[1918]: E0812 23:50:30.755875 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:30.758075 kubelet[1918]: E0812 23:50:30.757901 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:30.759753 env[1213]: time="2025-08-12T23:50:30.759689108Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:50:30.807481 kubelet[1918]: I0812 23:50:30.807406 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-t4h25" podStartSLOduration=1.289937717 podStartE2EDuration="9.807378288s" podCreationTimestamp="2025-08-12 23:50:21 +0000 UTC" firstStartedPulling="2025-08-12 23:50:21.992622509 +0000 UTC m=+8.401915959" lastFinishedPulling="2025-08-12 23:50:30.51006308 +0000 UTC m=+16.919356530" observedRunningTime="2025-08-12 23:50:30.807097906 +0000 UTC m=+17.216391356" watchObservedRunningTime="2025-08-12 23:50:30.807378288 +0000 UTC m=+17.216671738"
Aug 12 23:50:30.824978 env[1213]: time="2025-08-12T23:50:30.824884156Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\""
Aug 12 23:50:30.825603 env[1213]: time="2025-08-12T23:50:30.825573570Z" level=info msg="StartContainer for \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\""
Aug 12 23:50:30.845675 systemd[1]: Started cri-containerd-96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f.scope.
Aug 12 23:50:30.907115 env[1213]: time="2025-08-12T23:50:30.907041068Z" level=info msg="StartContainer for \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\" returns successfully"
Aug 12 23:50:30.916939 systemd[1]: cri-containerd-96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f.scope: Deactivated successfully.
Aug 12 23:50:31.019492 env[1213]: time="2025-08-12T23:50:31.019367264Z" level=info msg="shim disconnected" id=96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f
Aug 12 23:50:31.019492 env[1213]: time="2025-08-12T23:50:31.019411748Z" level=warning msg="cleaning up after shim disconnected" id=96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f namespace=k8s.io
Aug 12 23:50:31.019492 env[1213]: time="2025-08-12T23:50:31.019441270Z" level=info msg="cleaning up dead shim"
Aug 12 23:50:31.030829 env[1213]: time="2025-08-12T23:50:31.030776208Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:50:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2522 runtime=io.containerd.runc.v2\n"
Aug 12 23:50:31.565635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f-rootfs.mount: Deactivated successfully.
Aug 12 23:50:31.762048 kubelet[1918]: E0812 23:50:31.761982 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:31.764054 kubelet[1918]: E0812 23:50:31.763601 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:31.764317 env[1213]: time="2025-08-12T23:50:31.764268266Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:50:31.783727 env[1213]: time="2025-08-12T23:50:31.783673415Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\""
Aug 12 23:50:31.784518 env[1213]: time="2025-08-12T23:50:31.784485156Z" level=info msg="StartContainer for \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\""
Aug 12 23:50:31.809756 systemd[1]: Started cri-containerd-8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece.scope.
Aug 12 23:50:31.941840 systemd[1]: cri-containerd-8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece.scope: Deactivated successfully.
Aug 12 23:50:31.942229 env[1213]: time="2025-08-12T23:50:31.941917668Z" level=info msg="StartContainer for \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\" returns successfully"
Aug 12 23:50:31.973259 env[1213]: time="2025-08-12T23:50:31.973208116Z" level=info msg="shim disconnected" id=8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece
Aug 12 23:50:31.973259 env[1213]: time="2025-08-12T23:50:31.973256879Z" level=warning msg="cleaning up after shim disconnected" id=8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece namespace=k8s.io
Aug 12 23:50:31.973259 env[1213]: time="2025-08-12T23:50:31.973266440Z" level=info msg="cleaning up dead shim"
Aug 12 23:50:31.980895 env[1213]: time="2025-08-12T23:50:31.980841813Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:50:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n"
Aug 12 23:50:32.565845 systemd[1]: run-containerd-runc-k8s.io-8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece-runc.dxPoKG.mount: Deactivated successfully.
Aug 12 23:50:32.565961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece-rootfs.mount: Deactivated successfully.
Aug 12 23:50:32.770260 kubelet[1918]: E0812 23:50:32.770178 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:32.772096 env[1213]: time="2025-08-12T23:50:32.772037391Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:50:32.800255 env[1213]: time="2025-08-12T23:50:32.800175465Z" level=info msg="CreateContainer within sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\""
Aug 12 23:50:32.801466 env[1213]: time="2025-08-12T23:50:32.801075050Z" level=info msg="StartContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\""
Aug 12 23:50:32.831749 systemd[1]: Started cri-containerd-98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234.scope.
Aug 12 23:50:32.954217 env[1213]: time="2025-08-12T23:50:32.954153595Z" level=info msg="StartContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" returns successfully"
Aug 12 23:50:33.126191 kubelet[1918]: I0812 23:50:33.123804 1918 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 12 23:50:33.202571 systemd[1]: Created slice kubepods-burstable-pod68af242d_ebec_4a3d_aed7_060600776e35.slice.
Aug 12 23:50:33.206623 systemd[1]: Created slice kubepods-burstable-pod7ff450ac_04a0_4c77_a5e5_06866f44d359.slice.
Aug 12 23:50:33.216760 kubelet[1918]: I0812 23:50:33.216713 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68af242d-ebec-4a3d-aed7-060600776e35-config-volume\") pod \"coredns-7c65d6cfc9-f8pgb\" (UID: \"68af242d-ebec-4a3d-aed7-060600776e35\") " pod="kube-system/coredns-7c65d6cfc9-f8pgb"
Aug 12 23:50:33.216760 kubelet[1918]: I0812 23:50:33.216754 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gggn\" (UniqueName: \"kubernetes.io/projected/7ff450ac-04a0-4c77-a5e5-06866f44d359-kube-api-access-6gggn\") pod \"coredns-7c65d6cfc9-7cskh\" (UID: \"7ff450ac-04a0-4c77-a5e5-06866f44d359\") " pod="kube-system/coredns-7c65d6cfc9-7cskh"
Aug 12 23:50:33.216962 kubelet[1918]: I0812 23:50:33.216780 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslgn\" (UniqueName: \"kubernetes.io/projected/68af242d-ebec-4a3d-aed7-060600776e35-kube-api-access-lslgn\") pod \"coredns-7c65d6cfc9-f8pgb\" (UID: \"68af242d-ebec-4a3d-aed7-060600776e35\") " pod="kube-system/coredns-7c65d6cfc9-f8pgb"
Aug 12 23:50:33.216962 kubelet[1918]: I0812 23:50:33.216799 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ff450ac-04a0-4c77-a5e5-06866f44d359-config-volume\") pod \"coredns-7c65d6cfc9-7cskh\" (UID: \"7ff450ac-04a0-4c77-a5e5-06866f44d359\") " pod="kube-system/coredns-7c65d6cfc9-7cskh"
Aug 12 23:50:33.507755 kubelet[1918]: E0812 23:50:33.507634 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:33.509469 kubelet[1918]: E0812 23:50:33.509436 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:33.511900 env[1213]: time="2025-08-12T23:50:33.511850528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7cskh,Uid:7ff450ac-04a0-4c77-a5e5-06866f44d359,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:33.512319 env[1213]: time="2025-08-12T23:50:33.511898291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f8pgb,Uid:68af242d-ebec-4a3d-aed7-060600776e35,Namespace:kube-system,Attempt:0,}"
Aug 12 23:50:33.724492 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Aug 12 23:50:33.773649 kubelet[1918]: E0812 23:50:33.773400 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:33.796513 kubelet[1918]: I0812 23:50:33.796337 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v4sd6" podStartSLOduration=6.103836876 podStartE2EDuration="12.79631795s" podCreationTimestamp="2025-08-12 23:50:21 +0000 UTC" firstStartedPulling="2025-08-12 23:50:21.858284639 +0000 UTC m=+8.267578089" lastFinishedPulling="2025-08-12 23:50:28.550765673 +0000 UTC m=+14.960059163" observedRunningTime="2025-08-12 23:50:33.795994127 +0000 UTC m=+20.205287577" watchObservedRunningTime="2025-08-12 23:50:33.79631795 +0000 UTC m=+20.205611400"
Aug 12 23:50:34.064459 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Aug 12 23:50:34.775511 kubelet[1918]: E0812 23:50:34.775480 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:35.709645 systemd-networkd[1037]: cilium_host: Link UP
Aug 12 23:50:35.711034 systemd-networkd[1037]: cilium_net: Link UP
Aug 12 23:50:35.711247 systemd-networkd[1037]: cilium_net: Gained carrier
Aug 12 23:50:35.712201 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Aug 12 23:50:35.712275 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Aug 12 23:50:35.712018 systemd-networkd[1037]: cilium_host: Gained carrier
Aug 12 23:50:35.776780 kubelet[1918]: E0812 23:50:35.776695 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:35.849824 systemd-networkd[1037]: cilium_vxlan: Link UP
Aug 12 23:50:35.849832 systemd-networkd[1037]: cilium_vxlan: Gained carrier
Aug 12 23:50:36.074867 systemd-networkd[1037]: cilium_host: Gained IPv6LL
Aug 12 23:50:36.309457 kernel: NET: Registered PF_ALG protocol family
Aug 12 23:50:36.314664 systemd-networkd[1037]: cilium_net: Gained IPv6LL
Aug 12 23:50:36.891620 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL
Aug 12 23:50:36.979934 systemd-networkd[1037]: lxc_health: Link UP
Aug 12 23:50:37.003449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 12 23:50:37.003452 systemd-networkd[1037]: lxc_health: Gained carrier
Aug 12 23:50:37.202793 systemd-networkd[1037]: lxc9ac31d1f48cb: Link UP
Aug 12 23:50:37.245129 systemd-networkd[1037]: lxcde35b8c4d357: Link UP
Aug 12 23:50:37.246456 kernel: eth0: renamed from tmpf6a99
Aug 12 23:50:37.254482 kernel: eth0: renamed from tmp27ba6
Aug 12 23:50:37.264956 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcde35b8c4d357: link becomes ready
Aug 12 23:50:37.265079 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ac31d1f48cb: link becomes ready
Aug 12 23:50:37.261099 systemd-networkd[1037]: lxcde35b8c4d357: Gained carrier
Aug 12 23:50:37.264582 systemd-networkd[1037]: lxc9ac31d1f48cb: Gained carrier
Aug 12 23:50:37.722702 kubelet[1918]: E0812 23:50:37.722611 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:37.780639 kubelet[1918]: E0812 23:50:37.780609 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:38.235649 systemd-networkd[1037]: lxc_health: Gained IPv6LL
Aug 12 23:50:38.782262 kubelet[1918]: E0812 23:50:38.782205 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:39.066589 systemd-networkd[1037]: lxcde35b8c4d357: Gained IPv6LL
Aug 12 23:50:39.258564 systemd-networkd[1037]: lxc9ac31d1f48cb: Gained IPv6LL
Aug 12 23:50:41.124512 env[1213]: time="2025-08-12T23:50:41.124390707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:41.124512 env[1213]: time="2025-08-12T23:50:41.124470231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:41.124512 env[1213]: time="2025-08-12T23:50:41.124482031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:41.126158 env[1213]: time="2025-08-12T23:50:41.125767095Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27ba610fb994dd67aac6eefc26fb48b5b2e406cce619530c1d74fd4bb75dbde7 pid=3142 runtime=io.containerd.runc.v2
Aug 12 23:50:41.134108 env[1213]: time="2025-08-12T23:50:41.132358505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:50:41.134108 env[1213]: time="2025-08-12T23:50:41.132402307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:50:41.134108 env[1213]: time="2025-08-12T23:50:41.132417588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:50:41.134108 env[1213]: time="2025-08-12T23:50:41.132561515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a9913597cd19248d5d032a0871ca80e6847d56cf1f3979bc6f887193fc0a0a pid=3157 runtime=io.containerd.runc.v2
Aug 12 23:50:41.162887 systemd[1]: Started cri-containerd-27ba610fb994dd67aac6eefc26fb48b5b2e406cce619530c1d74fd4bb75dbde7.scope.
Aug 12 23:50:41.166744 systemd[1]: Started cri-containerd-f6a9913597cd19248d5d032a0871ca80e6847d56cf1f3979bc6f887193fc0a0a.scope.
Aug 12 23:50:41.276689 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 12 23:50:41.285012 systemd-resolved[1152]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 12 23:50:41.296132 env[1213]: time="2025-08-12T23:50:41.296085563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7cskh,Uid:7ff450ac-04a0-4c77-a5e5-06866f44d359,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a9913597cd19248d5d032a0871ca80e6847d56cf1f3979bc6f887193fc0a0a\""
Aug 12 23:50:41.297160 kubelet[1918]: E0812 23:50:41.297137 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:41.300312 env[1213]: time="2025-08-12T23:50:41.299728425Z" level=info msg="CreateContainer within sandbox \"f6a9913597cd19248d5d032a0871ca80e6847d56cf1f3979bc6f887193fc0a0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 12 23:50:41.315824 env[1213]: time="2025-08-12T23:50:41.315785667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f8pgb,Uid:68af242d-ebec-4a3d-aed7-060600776e35,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ba610fb994dd67aac6eefc26fb48b5b2e406cce619530c1d74fd4bb75dbde7\""
Aug 12 23:50:41.317643 kubelet[1918]: E0812 23:50:41.317614 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:41.319874 env[1213]: time="2025-08-12T23:50:41.319831110Z" level=info msg="CreateContainer within sandbox \"27ba610fb994dd67aac6eefc26fb48b5b2e406cce619530c1d74fd4bb75dbde7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 12 23:50:41.321671 env[1213]: time="2025-08-12T23:50:41.321623479Z" level=info msg="CreateContainer within sandbox \"f6a9913597cd19248d5d032a0871ca80e6847d56cf1f3979bc6f887193fc0a0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52625e5e98f1e4cb73464bedc8e2288bfd626f3db766726c30c1853da4fe4022\""
Aug 12 23:50:41.322291 env[1213]: time="2025-08-12T23:50:41.322268751Z" level=info msg="StartContainer for \"52625e5e98f1e4cb73464bedc8e2288bfd626f3db766726c30c1853da4fe4022\""
Aug 12 23:50:41.339549 env[1213]: time="2025-08-12T23:50:41.339501812Z" level=info msg="CreateContainer within sandbox \"27ba610fb994dd67aac6eefc26fb48b5b2e406cce619530c1d74fd4bb75dbde7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d26e72d2ef2c8da2e19a5f59cca37f36f4c074a189ae6bc3d4aadc650533aa33\""
Aug 12 23:50:41.340895 env[1213]: time="2025-08-12T23:50:41.340854880Z" level=info msg="StartContainer for \"d26e72d2ef2c8da2e19a5f59cca37f36f4c074a189ae6bc3d4aadc650533aa33\""
Aug 12 23:50:41.346728 systemd[1]: Started cri-containerd-52625e5e98f1e4cb73464bedc8e2288bfd626f3db766726c30c1853da4fe4022.scope.
Aug 12 23:50:41.369079 systemd[1]: Started cri-containerd-d26e72d2ef2c8da2e19a5f59cca37f36f4c074a189ae6bc3d4aadc650533aa33.scope.
Aug 12 23:50:41.431071 env[1213]: time="2025-08-12T23:50:41.430880257Z" level=info msg="StartContainer for \"52625e5e98f1e4cb73464bedc8e2288bfd626f3db766726c30c1853da4fe4022\" returns successfully"
Aug 12 23:50:41.437363 env[1213]: time="2025-08-12T23:50:41.437308618Z" level=info msg="StartContainer for \"d26e72d2ef2c8da2e19a5f59cca37f36f4c074a189ae6bc3d4aadc650533aa33\" returns successfully"
Aug 12 23:50:41.788436 kubelet[1918]: E0812 23:50:41.788075 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:41.789671 kubelet[1918]: E0812 23:50:41.789638 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:41.802347 kubelet[1918]: I0812 23:50:41.802269 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f8pgb" podStartSLOduration=20.802243968 podStartE2EDuration="20.802243968s" podCreationTimestamp="2025-08-12 23:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:41.801601496 +0000 UTC m=+28.210894946" watchObservedRunningTime="2025-08-12 23:50:41.802243968 +0000 UTC m=+28.211537418"
Aug 12 23:50:41.826558 kubelet[1918]: I0812 23:50:41.826484 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7cskh" podStartSLOduration=20.826454817 podStartE2EDuration="20.826454817s" podCreationTimestamp="2025-08-12 23:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:50:41.826033476 +0000 UTC m=+28.235326926" watchObservedRunningTime="2025-08-12 23:50:41.826454817 +0000 UTC m=+28.235748267"
Aug 12 23:50:42.792857 kubelet[1918]: E0812 23:50:42.792809 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:42.793545 kubelet[1918]: E0812 23:50:42.793506 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:43.586067 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:59164.service.
Aug 12 23:50:43.632464 sshd[3304]: Accepted publickey for core from 10.0.0.1 port 59164 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:50:43.634010 sshd[3304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:50:43.637870 systemd-logind[1203]: New session 6 of user core.
Aug 12 23:50:43.638804 systemd[1]: Started session-6.scope.
Aug 12 23:50:43.786502 sshd[3304]: pam_unix(sshd:session): session closed for user core
Aug 12 23:50:43.790654 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:59164.service: Deactivated successfully.
Aug 12 23:50:43.791608 systemd[1]: session-6.scope: Deactivated successfully.
Aug 12 23:50:43.792175 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit.
Aug 12 23:50:43.792935 systemd-logind[1203]: Removed session 6.
Aug 12 23:50:43.793892 kubelet[1918]: E0812 23:50:43.793864 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:43.794588 kubelet[1918]: E0812 23:50:43.794539 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:50:48.791708 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:59246.service.
Aug 12 23:50:48.842636 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 59246 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:50:48.843974 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:50:48.847885 systemd-logind[1203]: New session 7 of user core.
Aug 12 23:50:48.848854 systemd[1]: Started session-7.scope.
Aug 12 23:50:48.975955 sshd[3320]: pam_unix(sshd:session): session closed for user core
Aug 12 23:50:48.979317 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:59246.service: Deactivated successfully.
Aug 12 23:50:48.980255 systemd[1]: session-7.scope: Deactivated successfully.
Aug 12 23:50:48.980840 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit.
Aug 12 23:50:48.981512 systemd-logind[1203]: Removed session 7.
Aug 12 23:50:53.981001 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:55270.service.
Aug 12 23:50:54.026328 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 55270 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:50:54.028211 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:50:54.033195 systemd-logind[1203]: New session 8 of user core.
Aug 12 23:50:54.033846 systemd[1]: Started session-8.scope.
Aug 12 23:50:54.162567 sshd[3339]: pam_unix(sshd:session): session closed for user core
Aug 12 23:50:54.165195 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:55270.service: Deactivated successfully.
Aug 12 23:50:54.165993 systemd[1]: session-8.scope: Deactivated successfully.
Aug 12 23:50:54.166656 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit.
Aug 12 23:50:54.167655 systemd-logind[1203]: Removed session 8.
Aug 12 23:50:59.168315 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:55276.service.
Aug 12 23:50:59.223595 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 55276 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:50:59.225441 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:50:59.230860 systemd-logind[1203]: New session 9 of user core.
Aug 12 23:50:59.231248 systemd[1]: Started session-9.scope.
Aug 12 23:50:59.361713 sshd[3354]: pam_unix(sshd:session): session closed for user core
Aug 12 23:50:59.365060 systemd[1]: session-9.scope: Deactivated successfully.
Aug 12 23:50:59.365688 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit.
Aug 12 23:50:59.365852 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:55276.service: Deactivated successfully.
Aug 12 23:50:59.366954 systemd-logind[1203]: Removed session 9.
Aug 12 23:51:04.366934 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:35400.service.
Aug 12 23:51:04.413532 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 35400 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:04.415203 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:04.420081 systemd-logind[1203]: New session 10 of user core.
Aug 12 23:51:04.420861 systemd[1]: Started session-10.scope.
Aug 12 23:51:04.563065 sshd[3368]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:04.566751 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:35400.service: Deactivated successfully.
Aug 12 23:51:04.567656 systemd[1]: session-10.scope: Deactivated successfully.
Aug 12 23:51:04.568264 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit.
Aug 12 23:51:04.570256 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:35416.service.
Aug 12 23:51:04.571315 systemd-logind[1203]: Removed session 10.
Aug 12 23:51:04.618837 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 35416 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:04.623217 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:04.631714 systemd-logind[1203]: New session 11 of user core.
Aug 12 23:51:04.632660 systemd[1]: Started session-11.scope.
Aug 12 23:51:04.851410 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:35418.service.
Aug 12 23:51:04.852642 sshd[3382]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:04.857163 systemd[1]: session-11.scope: Deactivated successfully.
Aug 12 23:51:04.858072 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:35416.service: Deactivated successfully.
Aug 12 23:51:04.860529 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit.
Aug 12 23:51:04.862220 systemd-logind[1203]: Removed session 11.
Aug 12 23:51:04.914395 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 35418 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:04.916719 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:04.922808 systemd-logind[1203]: New session 12 of user core.
Aug 12 23:51:04.923224 systemd[1]: Started session-12.scope.
Aug 12 23:51:05.073908 sshd[3392]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:05.076783 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit.
Aug 12 23:51:05.077014 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:35418.service: Deactivated successfully.
Aug 12 23:51:05.077915 systemd[1]: session-12.scope: Deactivated successfully.
Aug 12 23:51:05.078545 systemd-logind[1203]: Removed session 12.
Aug 12 23:51:10.079150 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:35434.service.
Aug 12 23:51:10.122538 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:10.123855 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:10.127686 systemd-logind[1203]: New session 13 of user core.
Aug 12 23:51:10.128777 systemd[1]: Started session-13.scope.
Aug 12 23:51:10.248398 sshd[3406]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:10.251415 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit.
Aug 12 23:51:10.251658 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:35434.service: Deactivated successfully.
Aug 12 23:51:10.252353 systemd[1]: session-13.scope: Deactivated successfully.
Aug 12 23:51:10.253052 systemd-logind[1203]: Removed session 13.
Aug 12 23:51:15.259530 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:53484.service.
Aug 12 23:51:15.317038 sshd[3422]: Accepted publickey for core from 10.0.0.1 port 53484 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:15.319158 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:15.324331 systemd-logind[1203]: New session 14 of user core.
Aug 12 23:51:15.326405 systemd[1]: Started session-14.scope.
Aug 12 23:51:15.451833 sshd[3422]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:15.462534 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit.
Aug 12 23:51:15.462670 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:53484.service: Deactivated successfully.
Aug 12 23:51:15.463405 systemd[1]: session-14.scope: Deactivated successfully.
Aug 12 23:51:15.468943 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:53494.service.
Aug 12 23:51:15.469504 systemd-logind[1203]: Removed session 14.
Aug 12 23:51:15.518122 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 53494 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:15.520971 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:15.528451 systemd[1]: Started session-15.scope.
Aug 12 23:51:15.528933 systemd-logind[1203]: New session 15 of user core.
Aug 12 23:51:15.791921 sshd[3435]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:15.799045 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:53504.service.
Aug 12 23:51:15.800328 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:53494.service: Deactivated successfully.
Aug 12 23:51:15.801143 systemd[1]: session-15.scope: Deactivated successfully.
Aug 12 23:51:15.803460 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit.
Aug 12 23:51:15.805037 systemd-logind[1203]: Removed session 15.
Aug 12 23:51:15.847205 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 53504 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:15.849006 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:15.855172 systemd[1]: Started session-16.scope.
Aug 12 23:51:15.855655 systemd-logind[1203]: New session 16 of user core.
Aug 12 23:51:17.367389 sshd[3445]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:17.369727 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:53510.service.
Aug 12 23:51:17.372557 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:53504.service: Deactivated successfully.
Aug 12 23:51:17.373374 systemd[1]: session-16.scope: Deactivated successfully.
Aug 12 23:51:17.374542 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit.
Aug 12 23:51:17.376181 systemd-logind[1203]: Removed session 16.
Aug 12 23:51:17.421382 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 53510 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:17.422988 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:17.426776 systemd-logind[1203]: New session 17 of user core.
Aug 12 23:51:17.427738 systemd[1]: Started session-17.scope.
Aug 12 23:51:17.717305 sshd[3464]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:17.721764 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:53516.service.
Aug 12 23:51:17.722238 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:53510.service: Deactivated successfully.
Aug 12 23:51:17.723559 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:51:17.724335 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:51:17.725396 systemd-logind[1203]: Removed session 17.
Aug 12 23:51:17.765966 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 53516 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:17.767631 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:17.770998 systemd-logind[1203]: New session 18 of user core.
Aug 12 23:51:17.771843 systemd[1]: Started session-18.scope.
Aug 12 23:51:17.899328 sshd[3476]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:17.901917 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:53516.service: Deactivated successfully.
Aug 12 23:51:17.902613 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:51:17.903163 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:51:17.903986 systemd-logind[1203]: Removed session 18.
Aug 12 23:51:22.931129 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:56936.service.
Aug 12 23:51:22.993256 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 56936 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:22.999107 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:23.011071 systemd-logind[1203]: New session 19 of user core.
Aug 12 23:51:23.015636 systemd[1]: Started session-19.scope.
Aug 12 23:51:23.207730 sshd[3492]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:23.216692 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:56936.service: Deactivated successfully.
Aug 12 23:51:23.218418 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:51:23.219112 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:51:23.220062 systemd-logind[1203]: Removed session 19.
Aug 12 23:51:23.687537 kubelet[1918]: E0812 23:51:23.685616 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:27.683851 kubelet[1918]: E0812 23:51:27.683742 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:28.208496 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:56948.service.
Aug 12 23:51:28.260754 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 56948 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:28.262161 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:28.266559 systemd-logind[1203]: New session 20 of user core.
Aug 12 23:51:28.267561 systemd[1]: Started session-20.scope.
Aug 12 23:51:28.399389 sshd[3508]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:28.404307 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:56948.service: Deactivated successfully.
Aug 12 23:51:28.405610 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:51:28.406359 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:51:28.407320 systemd-logind[1203]: Removed session 20.
Aug 12 23:51:29.683848 kubelet[1918]: E0812 23:51:29.683811 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:33.411794 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:50800.service.
Aug 12 23:51:33.461050 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 50800 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:33.462399 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:33.472454 systemd-logind[1203]: New session 21 of user core.
Aug 12 23:51:33.473514 systemd[1]: Started session-21.scope.
Aug 12 23:51:33.615510 sshd[3521]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:33.621063 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:50800.service: Deactivated successfully.
Aug 12 23:51:33.622119 systemd[1]: session-21.scope: Deactivated successfully.
Aug 12 23:51:33.622931 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit.
Aug 12 23:51:33.624060 systemd-logind[1203]: Removed session 21.
Aug 12 23:51:34.683811 kubelet[1918]: E0812 23:51:34.683769 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:38.622783 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:50802.service.
Aug 12 23:51:38.669991 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 50802 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:38.672396 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:38.677377 systemd-logind[1203]: New session 22 of user core.
Aug 12 23:51:38.678317 systemd[1]: Started session-22.scope.
Aug 12 23:51:38.817709 sshd[3535]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:38.821108 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:50802.service: Deactivated successfully.
Aug 12 23:51:38.822080 systemd[1]: session-22.scope: Deactivated successfully.
Aug 12 23:51:38.822961 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit.
Aug 12 23:51:38.824653 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:50818.service.
Aug 12 23:51:38.826597 systemd-logind[1203]: Removed session 22.
Aug 12 23:51:38.875718 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 50818 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:38.878847 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:38.888640 systemd-logind[1203]: New session 23 of user core.
Aug 12 23:51:38.889382 systemd[1]: Started session-23.scope.
Aug 12 23:51:40.935516 env[1213]: time="2025-08-12T23:51:40.923683160Z" level=info msg="StopContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" with timeout 30 (s)"
Aug 12 23:51:40.935516 env[1213]: time="2025-08-12T23:51:40.924675682Z" level=info msg="Stop container \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" with signal terminated"
Aug 12 23:51:40.944542 systemd[1]: cri-containerd-fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950.scope: Deactivated successfully.
Aug 12 23:51:40.975114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950-rootfs.mount: Deactivated successfully.
Aug 12 23:51:40.994477 env[1213]: time="2025-08-12T23:51:40.994396697Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 12 23:51:41.001606 env[1213]: time="2025-08-12T23:51:41.001568591Z" level=info msg="StopContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" with timeout 2 (s)"
Aug 12 23:51:41.002123 env[1213]: time="2025-08-12T23:51:41.002096112Z" level=info msg="Stop container \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" with signal terminated"
Aug 12 23:51:41.004037 env[1213]: time="2025-08-12T23:51:41.003996117Z" level=info msg="shim disconnected" id=fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950
Aug 12 23:51:41.004189 env[1213]: time="2025-08-12T23:51:41.004166957Z" level=warning msg="cleaning up after shim disconnected" id=fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950 namespace=k8s.io
Aug 12 23:51:41.004269 env[1213]: time="2025-08-12T23:51:41.004255198Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:41.010060 systemd-networkd[1037]: lxc_health: Link DOWN
Aug 12 23:51:41.010067 systemd-networkd[1037]: lxc_health: Lost carrier
Aug 12 23:51:41.017124 env[1213]: time="2025-08-12T23:51:41.017084789Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3597 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:41.033058 systemd[1]: cri-containerd-98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234.scope: Deactivated successfully.
Aug 12 23:51:41.033418 systemd[1]: cri-containerd-98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234.scope: Consumed 7.795s CPU time.
Aug 12 23:51:41.037994 env[1213]: time="2025-08-12T23:51:41.037787999Z" level=info msg="StopContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" returns successfully"
Aug 12 23:51:41.039429 env[1213]: time="2025-08-12T23:51:41.039378003Z" level=info msg="StopPodSandbox for \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\""
Aug 12 23:51:41.039642 env[1213]: time="2025-08-12T23:51:41.039613284Z" level=info msg="Container to stop \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.041912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d-shm.mount: Deactivated successfully.
Aug 12 23:51:41.055945 systemd[1]: cri-containerd-8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d.scope: Deactivated successfully.
Aug 12 23:51:41.063523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234-rootfs.mount: Deactivated successfully.
Aug 12 23:51:41.070811 env[1213]: time="2025-08-12T23:51:41.070764479Z" level=info msg="shim disconnected" id=98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234
Aug 12 23:51:41.071176 env[1213]: time="2025-08-12T23:51:41.071155640Z" level=warning msg="cleaning up after shim disconnected" id=98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234 namespace=k8s.io
Aug 12 23:51:41.071263 env[1213]: time="2025-08-12T23:51:41.071245640Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:41.080190 env[1213]: time="2025-08-12T23:51:41.080144302Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3643 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:41.083451 env[1213]: time="2025-08-12T23:51:41.083381950Z" level=info msg="StopContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" returns successfully"
Aug 12 23:51:41.084682 env[1213]: time="2025-08-12T23:51:41.084629513Z" level=info msg="StopPodSandbox for \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\""
Aug 12 23:51:41.084909 env[1213]: time="2025-08-12T23:51:41.084878234Z" level=info msg="Container to stop \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.085560 env[1213]: time="2025-08-12T23:51:41.085533235Z" level=info msg="Container to stop \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.085715 env[1213]: time="2025-08-12T23:51:41.085694596Z" level=info msg="Container to stop \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.085839 env[1213]: time="2025-08-12T23:51:41.084744033Z" level=info msg="shim disconnected" id=8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d
Aug 12 23:51:41.085945 env[1213]: time="2025-08-12T23:51:41.085928516Z" level=warning msg="cleaning up after shim disconnected" id=8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d namespace=k8s.io
Aug 12 23:51:41.086088 env[1213]: time="2025-08-12T23:51:41.086072917Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:41.086374 env[1213]: time="2025-08-12T23:51:41.085907356Z" level=info msg="Container to stop \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.086374 env[1213]: time="2025-08-12T23:51:41.086355957Z" level=info msg="Container to stop \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:41.092545 systemd[1]: cri-containerd-b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f.scope: Deactivated successfully.
Aug 12 23:51:41.100335 env[1213]: time="2025-08-12T23:51:41.100274671Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:41.100914 env[1213]: time="2025-08-12T23:51:41.100875273Z" level=info msg="TearDown network for sandbox \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\" successfully"
Aug 12 23:51:41.100914 env[1213]: time="2025-08-12T23:51:41.100907393Z" level=info msg="StopPodSandbox for \"8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d\" returns successfully"
Aug 12 23:51:41.121197 env[1213]: time="2025-08-12T23:51:41.121135362Z" level=info msg="shim disconnected" id=b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f
Aug 12 23:51:41.121197 env[1213]: time="2025-08-12T23:51:41.121194122Z" level=warning msg="cleaning up after shim disconnected" id=b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f namespace=k8s.io
Aug 12 23:51:41.121197 env[1213]: time="2025-08-12T23:51:41.121205282Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:41.131405 env[1213]: time="2025-08-12T23:51:41.131355347Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3693 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:41.131729 env[1213]: time="2025-08-12T23:51:41.131705508Z" level=info msg="TearDown network for sandbox \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" successfully"
Aug 12 23:51:41.131729 env[1213]: time="2025-08-12T23:51:41.131730508Z" level=info msg="StopPodSandbox for \"b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f\" returns successfully"
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275840 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qfpw\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-kube-api-access-8qfpw\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275897 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-run\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275914 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-bpf-maps\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275929 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-xtables-lock\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275947 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-hostproc\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.276825 kubelet[1918]: I0812 23:51:41.275972 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cni-path\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.275992 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f32eb236-8db0-4193-ac1f-f3237824458e-clustermesh-secrets\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.276009 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-lib-modules\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.276024 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-etc-cni-netd\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.276047 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-cgroup\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.276066 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-hubble-tls\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277415 kubelet[1918]: I0812 23:51:41.276084 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-config-path\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277592 kubelet[1918]: I0812 23:51:41.276109 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-cilium-config-path\") pod \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\" (UID: \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\") "
Aug 12 23:51:41.277592 kubelet[1918]: I0812 23:51:41.276126 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-kernel\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277592 kubelet[1918]: I0812 23:51:41.276142 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-net\") pod \"f32eb236-8db0-4193-ac1f-f3237824458e\" (UID: \"f32eb236-8db0-4193-ac1f-f3237824458e\") "
Aug 12 23:51:41.277592 kubelet[1918]: I0812 23:51:41.276158 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns74h\" (UniqueName: \"kubernetes.io/projected/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-kube-api-access-ns74h\") pod \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\" (UID: \"9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03\") "
Aug 12 23:51:41.280060 kubelet[1918]: I0812 23:51:41.279891 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.280060 kubelet[1918]: I0812 23:51:41.279966 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.280060 kubelet[1918]: I0812 23:51:41.279998 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.282555 kubelet[1918]: I0812 23:51:41.280335 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.282555 kubelet[1918]: I0812 23:51:41.280385 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-hostproc" (OuterVolumeSpecName: "hostproc") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.282555 kubelet[1918]: I0812 23:51:41.280443 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cni-path" (OuterVolumeSpecName: "cni-path") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.282555 kubelet[1918]: I0812 23:51:41.281398 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:51:41.283045 kubelet[1918]: I0812 23:51:41.283007 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.283190 kubelet[1918]: I0812 23:51:41.283161 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.283286 kubelet[1918]: I0812 23:51:41.283272 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.283584 kubelet[1918]: I0812 23:51:41.283559 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 12 23:51:41.283993 kubelet[1918]: I0812 23:51:41.283950 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-kube-api-access-ns74h" (OuterVolumeSpecName: "kube-api-access-ns74h") pod "9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03" (UID: "9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03"). InnerVolumeSpecName "kube-api-access-ns74h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:51:41.284075 kubelet[1918]: I0812 23:51:41.284054 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-kube-api-access-8qfpw" (OuterVolumeSpecName: "kube-api-access-8qfpw") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "kube-api-access-8qfpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:51:41.284467 kubelet[1918]: I0812 23:51:41.284408 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32eb236-8db0-4193-ac1f-f3237824458e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 12 23:51:41.284784 kubelet[1918]: I0812 23:51:41.284760 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f32eb236-8db0-4193-ac1f-f3237824458e" (UID: "f32eb236-8db0-4193-ac1f-f3237824458e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 12 23:51:41.285110 kubelet[1918]: I0812 23:51:41.285060 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03" (UID: "9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 12 23:51:41.377211 kubelet[1918]: I0812 23:51:41.377141 1918 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377211 kubelet[1918]: I0812 23:51:41.377188 1918 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f32eb236-8db0-4193-ac1f-f3237824458e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377211 kubelet[1918]: I0812 23:51:41.377201 1918 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377211 kubelet[1918]: I0812 23:51:41.377212 1918 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377211 kubelet[1918]: I0812 23:51:41.377220 1918 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377231 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377240 1918 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377249 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377257 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377266 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377273 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377281 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns74h\" (UniqueName: \"kubernetes.io/projected/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03-kube-api-access-ns74h\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377562 kubelet[1918]: I0812 23:51:41.377289 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qfpw\" (UniqueName: \"kubernetes.io/projected/f32eb236-8db0-4193-ac1f-f3237824458e-kube-api-access-8qfpw\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377844 kubelet[1918]: I0812 23:51:41.377296 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377844 kubelet[1918]: I0812 23:51:41.377319 1918 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.377844 kubelet[1918]: I0812 23:51:41.377327 1918 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f32eb236-8db0-4193-ac1f-f3237824458e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 12 23:51:41.695196 systemd[1]: Removed slice kubepods-besteffort-pod9a6f12de_7b5d_4d4f_99d6_9b0a948a3a03.slice.
Aug 12 23:51:41.697011 systemd[1]: Removed slice kubepods-burstable-podf32eb236_8db0_4193_ac1f_f3237824458e.slice.
Aug 12 23:51:41.697098 systemd[1]: kubepods-burstable-podf32eb236_8db0_4193_ac1f_f3237824458e.slice: Consumed 8.128s CPU time.
Aug 12 23:51:41.922184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f397e8b7f86713a3a019a63a1d571abfcaac61caa36d9ddc10aebabf04b706d-rootfs.mount: Deactivated successfully.
Aug 12 23:51:41.922298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f-rootfs.mount: Deactivated successfully.
Aug 12 23:51:41.922363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b93e0cc24d6f0db586c9c19a801da11de4f85089512c42036bd697d78fbd2c5f-shm.mount: Deactivated successfully.
Aug 12 23:51:41.922431 systemd[1]: var-lib-kubelet-pods-9a6f12de\x2d7b5d\x2d4d4f\x2d99d6\x2d9b0a948a3a03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dns74h.mount: Deactivated successfully.
Aug 12 23:51:41.922499 systemd[1]: var-lib-kubelet-pods-f32eb236\x2d8db0\x2d4193\x2dac1f\x2df3237824458e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qfpw.mount: Deactivated successfully.
Aug 12 23:51:41.922553 systemd[1]: var-lib-kubelet-pods-f32eb236\x2d8db0\x2d4193\x2dac1f\x2df3237824458e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 12 23:51:41.922612 systemd[1]: var-lib-kubelet-pods-f32eb236\x2d8db0\x2d4193\x2dac1f\x2df3237824458e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 12 23:51:41.928827 kubelet[1918]: I0812 23:51:41.928014 1918 scope.go:117] "RemoveContainer" containerID="98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234"
Aug 12 23:51:41.932144 env[1213]: time="2025-08-12T23:51:41.930918492Z" level=info msg="RemoveContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\""
Aug 12 23:51:42.078617 env[1213]: time="2025-08-12T23:51:42.078489127Z" level=info msg="RemoveContainer for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" returns successfully"
Aug 12 23:51:42.079039 kubelet[1918]: I0812 23:51:42.078901 1918 scope.go:117] "RemoveContainer" containerID="8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece"
Aug 12 23:51:42.080479 env[1213]: time="2025-08-12T23:51:42.080399813Z" level=info msg="RemoveContainer for \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\""
Aug 12 23:51:42.115624 env[1213]: time="2025-08-12T23:51:42.115568675Z" level=info msg="RemoveContainer for \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\" returns successfully"
Aug 12 23:51:42.115946 kubelet[1918]: I0812 23:51:42.115920 1918 scope.go:117] "RemoveContainer" containerID="96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f"
Aug 12 23:51:42.117073 env[1213]: time="2025-08-12T23:51:42.117028439Z" level=info msg="RemoveContainer for \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\""
Aug 12 23:51:42.126363 env[1213]: time="2025-08-12T23:51:42.126289786Z" level=info msg="RemoveContainer for \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\" returns successfully"
Aug 12 23:51:42.127240 kubelet[1918]: I0812 23:51:42.126593 1918 scope.go:117] "RemoveContainer" containerID="073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e"
Aug 12 23:51:42.128594 env[1213]: time="2025-08-12T23:51:42.128256912Z" level=info msg="RemoveContainer for \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\""
Aug 12 23:51:42.137128 env[1213]: time="2025-08-12T23:51:42.137081137Z" level=info msg="RemoveContainer for \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\" returns successfully"
Aug 12 23:51:42.137605 kubelet[1918]: I0812 23:51:42.137558 1918 scope.go:117] "RemoveContainer" containerID="8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc"
Aug 12 23:51:42.138825 env[1213]: time="2025-08-12T23:51:42.138776302Z" level=info msg="RemoveContainer for \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\""
Aug 12 23:51:42.141318 env[1213]: time="2025-08-12T23:51:42.141262230Z" level=info msg="RemoveContainer for \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\" returns successfully"
Aug 12 23:51:42.141597 kubelet[1918]: I0812 23:51:42.141563 1918 scope.go:117] "RemoveContainer" containerID="98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234"
Aug 12 23:51:42.141913 env[1213]: time="2025-08-12T23:51:42.141821391Z" level=error msg="ContainerStatus for \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\": not found"
Aug 12 23:51:42.142064 kubelet[1918]: E0812 23:51:42.142023 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\": not found" containerID="98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234"
Aug 12 23:51:42.142150 kubelet[1918]: I0812 23:51:42.142063 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234"} err="failed to get container status \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\": rpc error: code = NotFound desc = an error occurred when try to find container \"98938c15aca7767296dd7025112a6f27e37f763606fe83d0d3c62b483fead234\": not found"
Aug 12 23:51:42.142150 kubelet[1918]: I0812 23:51:42.142144 1918 scope.go:117] "RemoveContainer" containerID="8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece"
Aug 12 23:51:42.142368 env[1213]: time="2025-08-12T23:51:42.142300313Z" level=error msg="ContainerStatus for \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\": not found"
Aug 12 23:51:42.142491 kubelet[1918]: E0812 23:51:42.142457 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\": not found" containerID="8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece"
Aug 12 23:51:42.142542 kubelet[1918]: I0812 23:51:42.142488 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece"} err="failed to get container status \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\": rpc error: code = NotFound desc = an error occurred when try to find container \"8384807afd93a918c4164ad7b9b0bb3c9ac8f72d47f37d1d197007174fbacece\": not found"
Aug 12 23:51:42.142542 kubelet[1918]: I0812 23:51:42.142514 1918 scope.go:117] "RemoveContainer" containerID="96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f"
Aug 12 23:51:42.142820 env[1213]: time="2025-08-12T23:51:42.142722914Z" level=error msg="ContainerStatus for \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\": not found"
Aug 12 23:51:42.142888 kubelet[1918]: E0812 23:51:42.142852 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\": not found" containerID="96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f"
Aug 12 23:51:42.142888 kubelet[1918]: I0812 23:51:42.142868 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f"} err="failed to get container status \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"96780df6f8f292e37652dcb051eb07dc3421889fcaa633f0ab67c44f64344f8f\": not found"
Aug 12 23:51:42.142888 kubelet[1918]: I0812 23:51:42.142879 1918 scope.go:117] "RemoveContainer" containerID="073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e"
Aug 12 23:51:42.143064 env[1213]: time="2025-08-12T23:51:42.142999035Z" level=error msg="ContainerStatus for \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\": not found"
Aug 12 23:51:42.143113 kubelet[1918]: E0812 23:51:42.143103 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\": not found" containerID="073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e"
Aug 12 23:51:42.143149 kubelet[1918]: I0812 23:51:42.143118 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e"} err="failed to get container status \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\": rpc error: code = NotFound desc = an error occurred when try to find container \"073ae3c97121a83f920f6df84870119c0e8d551660a8cc9bbdc408d14644116e\": not found"
Aug 12 23:51:42.143149 kubelet[1918]: I0812 23:51:42.143129 1918 scope.go:117] "RemoveContainer" containerID="8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc"
Aug 12 23:51:42.143355 env[1213]: time="2025-08-12T23:51:42.143273235Z" level=error msg="ContainerStatus for \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\": not found"
Aug 12 23:51:42.143436 kubelet[1918]: E0812 23:51:42.143405 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\": not found" containerID="8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc"
Aug 12 23:51:42.143491 kubelet[1918]: I0812 23:51:42.143435 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc"} err="failed to get container status \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d395f5584b305f886668b87e065f2cf4ca830166f4e0abbb7b0094533446bfc\": not found"
Aug 12 23:51:42.143491 kubelet[1918]: I0812 23:51:42.143449 1918 scope.go:117] "RemoveContainer" containerID="fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950"
Aug 12 23:51:42.144624 env[1213]: time="2025-08-12T23:51:42.144577519Z" level=info msg="RemoveContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\""
Aug 12 23:51:42.148719 env[1213]: time="2025-08-12T23:51:42.148665491Z" level=info msg="RemoveContainer for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" returns successfully"
Aug 12 23:51:42.148966 kubelet[1918]: I0812 23:51:42.148927 1918 scope.go:117] "RemoveContainer" containerID="fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950"
Aug 12 23:51:42.149250 env[1213]: time="2025-08-12T23:51:42.149179013Z" level=error msg="ContainerStatus for \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\": not found"
Aug 12 23:51:42.149379 kubelet[1918]: E0812 23:51:42.149339 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\": not found" containerID="fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950"
Aug 12 23:51:42.149457 kubelet[1918]: I0812 23:51:42.149380 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950"} err="failed to get container status \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\": rpc error: code = NotFound desc = an error occurred when try to find container \"fff8fe381559cfae6bf087c3191e8876762a8de30f4cdeccb74fb6ea3fc1b950\": not found"
Aug 12 23:51:42.861077 sshd[3548]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:42.866098 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:54450.service.
Aug 12 23:51:42.867977 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:50818.service: Deactivated successfully.
Aug 12 23:51:42.869137 systemd[1]: session-23.scope: Deactivated successfully.
Aug 12 23:51:42.869294 systemd[1]: session-23.scope: Consumed 1.304s CPU time.
Aug 12 23:51:42.870709 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit.
Aug 12 23:51:42.873197 systemd-logind[1203]: Removed session 23.
Aug 12 23:51:42.931255 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 54450 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:42.933292 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:42.941513 systemd-logind[1203]: New session 24 of user core.
Aug 12 23:51:42.942045 systemd[1]: Started session-24.scope.
Aug 12 23:51:43.686098 kubelet[1918]: I0812 23:51:43.686046 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03" path="/var/lib/kubelet/pods/9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03/volumes"
Aug 12 23:51:43.686582 kubelet[1918]: I0812 23:51:43.686501 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" path="/var/lib/kubelet/pods/f32eb236-8db0-4193-ac1f-f3237824458e/volumes"
Aug 12 23:51:43.738522 kubelet[1918]: E0812 23:51:43.738472 1918 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:51:43.843106 sshd[3713]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:43.847954 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:54454.service.
Aug 12 23:51:43.848527 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:54450.service: Deactivated successfully.
Aug 12 23:51:43.849382 systemd[1]: session-24.scope: Deactivated successfully.
Aug 12 23:51:43.850962 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit.
Aug 12 23:51:43.856651 systemd-logind[1203]: Removed session 24.
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886520 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="mount-cgroup"
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886557 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03" containerName="cilium-operator"
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886566 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="clean-cilium-state"
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886573 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="apply-sysctl-overwrites"
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886580 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="mount-bpf-fs"
Aug 12 23:51:43.886565 kubelet[1918]: E0812 23:51:43.886591 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="cilium-agent"
Aug 12 23:51:43.886565 kubelet[1918]: I0812 23:51:43.886617 1918 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32eb236-8db0-4193-ac1f-f3237824458e" containerName="cilium-agent"
Aug 12 23:51:43.886565 kubelet[1918]: I0812 23:51:43.886625 1918 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6f12de-7b5d-4d4f-99d6-9b0a948a3a03" containerName="cilium-operator"
Aug 12 23:51:43.895862 systemd[1]: Created slice kubepods-burstable-podbdea2cbe_791f_4362_978c_b656d3c1e90c.slice.
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901638 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-run\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901681 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cni-path\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901703 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-cgroup\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901721 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-kernel\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901739 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-lib-modules\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904575 kubelet[1918]: I0812 23:51:43.901756 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-ipsec-secrets\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.904332 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:43.905179 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 54454 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901774 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-hostproc\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901791 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-net\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901807 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-hubble-tls\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901822 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-bpf-maps\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901837 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-clustermesh-secrets\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905279 kubelet[1918]: I0812 23:51:43.901853 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-etc-cni-netd\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905490 kubelet[1918]: I0812 23:51:43.901869 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-xtables-lock\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905490 kubelet[1918]: I0812 23:51:43.901887 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-config-path\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.905490 kubelet[1918]: I0812 23:51:43.901904 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjfh9\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-kube-api-access-zjfh9\") pod \"cilium-dtn67\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " pod="kube-system/cilium-dtn67"
Aug 12 23:51:43.910947 systemd-logind[1203]: New session 25 of user core.
Aug 12 23:51:43.912255 systemd[1]: Started session-25.scope.
Aug 12 23:51:44.132624 sshd[3725]: pam_unix(sshd:session): session closed for user core
Aug 12 23:51:44.137826 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:54476.service.
Aug 12 23:51:44.138643 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:54454.service: Deactivated successfully.
Aug 12 23:51:44.139528 systemd[1]: session-25.scope: Deactivated successfully.
Aug 12 23:51:44.141729 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit.
Aug 12 23:51:44.147854 kubelet[1918]: E0812 23:51:44.147813 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:44.148597 env[1213]: time="2025-08-12T23:51:44.148531101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dtn67,Uid:bdea2cbe-791f-4362-978c-b656d3c1e90c,Namespace:kube-system,Attempt:0,}"
Aug 12 23:51:44.151943 systemd-logind[1203]: Removed session 25.
Aug 12 23:51:44.178173 env[1213]: time="2025-08-12T23:51:44.178080094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:51:44.178173 env[1213]: time="2025-08-12T23:51:44.178130414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:51:44.178173 env[1213]: time="2025-08-12T23:51:44.178141534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:51:44.178554 env[1213]: time="2025-08-12T23:51:44.178510176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e pid=3752 runtime=io.containerd.runc.v2
Aug 12 23:51:44.191817 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 54476 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 12 23:51:44.195003 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 12 23:51:44.198969 systemd[1]: Started cri-containerd-6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e.scope.
Aug 12 23:51:44.209495 systemd-logind[1203]: New session 26 of user core.
Aug 12 23:51:44.209873 systemd[1]: Started session-26.scope.
Aug 12 23:51:44.244601 env[1213]: time="2025-08-12T23:51:44.243764825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dtn67,Uid:bdea2cbe-791f-4362-978c-b656d3c1e90c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\""
Aug 12 23:51:44.245723 kubelet[1918]: E0812 23:51:44.245190 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:44.248806 env[1213]: time="2025-08-12T23:51:44.248750924Z" level=info msg="CreateContainer within sandbox \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:51:44.260266 env[1213]: time="2025-08-12T23:51:44.260210327Z" level=info msg="CreateContainer within sandbox \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\""
Aug 12 23:51:44.261162 env[1213]: time="2025-08-12T23:51:44.261134411Z" level=info msg="StartContainer for \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\""
Aug 12 23:51:44.278174 systemd[1]: Started cri-containerd-f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f.scope.
Aug 12 23:51:44.297718 systemd[1]: cri-containerd-f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f.scope: Deactivated successfully.
Aug 12 23:51:44.322097 env[1213]: time="2025-08-12T23:51:44.322042763Z" level=info msg="shim disconnected" id=f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f
Aug 12 23:51:44.322623 env[1213]: time="2025-08-12T23:51:44.322599725Z" level=warning msg="cleaning up after shim disconnected" id=f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f namespace=k8s.io
Aug 12 23:51:44.323576 env[1213]: time="2025-08-12T23:51:44.323550729Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:44.332307 env[1213]: time="2025-08-12T23:51:44.332247602Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3819 runtime=io.containerd.runc.v2\ntime=\"2025-08-12T23:51:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Aug 12 23:51:44.332859 env[1213]: time="2025-08-12T23:51:44.332751124Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed"
Aug 12 23:51:44.333527 env[1213]: time="2025-08-12T23:51:44.333481127Z" level=error msg="Failed to pipe stdout of container \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\"" error="reading from a closed fifo"
Aug 12 23:51:44.333603 env[1213]: time="2025-08-12T23:51:44.333504527Z" level=error msg="Failed to pipe stderr of container \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\"" error="reading from a closed fifo"
Aug 12 23:51:44.336585 env[1213]: time="2025-08-12T23:51:44.336514378Z" level=error msg="StartContainer for \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Aug 12 23:51:44.337269 kubelet[1918]: E0812 23:51:44.336803 1918 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f"
Aug 12 23:51:44.337269 kubelet[1918]: E0812 23:51:44.337232 1918 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Aug 12 23:51:44.337269 kubelet[1918]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Aug 12 23:51:44.337269 kubelet[1918]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Aug 12 23:51:44.337269 kubelet[1918]: rm /hostbin/cilium-mount
Aug 12 23:51:44.337526 kubelet[1918]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjfh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dtn67_kube-system(bdea2cbe-791f-4362-978c-b656d3c1e90c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Aug 12 23:51:44.337526 kubelet[1918]: > logger="UnhandledError"
Aug 12 23:51:44.344772 kubelet[1918]: E0812 23:51:44.344728 1918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dtn67" podUID="bdea2cbe-791f-4362-978c-b656d3c1e90c"
Aug 12 23:51:44.951871 env[1213]: time="2025-08-12T23:51:44.951537965Z" level=info msg="StopPodSandbox for \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\""
Aug 12 23:51:44.951871 env[1213]: time="2025-08-12T23:51:44.951608565Z" level=info msg="Container to stop \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 12 23:51:44.960457 systemd[1]: cri-containerd-6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e.scope: Deactivated successfully.
Aug 12 23:51:44.997704 env[1213]: time="2025-08-12T23:51:44.997655701Z" level=info msg="shim disconnected" id=6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e
Aug 12 23:51:44.998568 env[1213]: time="2025-08-12T23:51:44.998483864Z" level=warning msg="cleaning up after shim disconnected" id=6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e namespace=k8s.io
Aug 12 23:51:44.998568 env[1213]: time="2025-08-12T23:51:44.998504184Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:45.008171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e-shm.mount: Deactivated successfully.
Aug 12 23:51:45.010783 env[1213]: time="2025-08-12T23:51:45.010725554Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3849 runtime=io.containerd.runc.v2\n" Aug 12 23:51:45.011077 env[1213]: time="2025-08-12T23:51:45.011038675Z" level=info msg="TearDown network for sandbox \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\" successfully" Aug 12 23:51:45.011077 env[1213]: time="2025-08-12T23:51:45.011068395Z" level=info msg="StopPodSandbox for \"6d59f9b6690468a584867d23737c43b43107a577ef617457ff0d73406988053e\" returns successfully" Aug 12 23:51:45.110451 kubelet[1918]: I0812 23:51:45.110353 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.110885 kubelet[1918]: I0812 23:51:45.110468 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-xtables-lock\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.110885 kubelet[1918]: I0812 23:51:45.110509 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cni-path\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.110885 kubelet[1918]: I0812 23:51:45.110539 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cni-path" (OuterVolumeSpecName: "cni-path") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.110885 kubelet[1918]: I0812 23:51:45.110571 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-hubble-tls\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.110885 kubelet[1918]: I0812 23:51:45.110765 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.111046 kubelet[1918]: I0812 23:51:45.110588 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-net\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111046 kubelet[1918]: I0812 23:51:45.110965 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-cgroup\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111046 kubelet[1918]: I0812 23:51:45.110992 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-hostproc\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111046 kubelet[1918]: I0812 23:51:45.111011 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjfh9\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-kube-api-access-zjfh9\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111046 kubelet[1918]: I0812 23:51:45.111030 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-kernel\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111049 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-ipsec-secrets\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111085 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-config-path\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111101 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-etc-cni-netd\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111119 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-lib-modules\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111163 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-bpf-maps\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111194 kubelet[1918]: I0812 23:51:45.111183 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-clustermesh-secrets\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111197 
1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-run\") pod \"bdea2cbe-791f-4362-978c-b656d3c1e90c\" (UID: \"bdea2cbe-791f-4362-978c-b656d3c1e90c\") " Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111246 1918 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111257 1918 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111266 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111308 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.111393 kubelet[1918]: I0812 23:51:45.111328 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.111593 kubelet[1918]: I0812 23:51:45.111362 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.111593 kubelet[1918]: I0812 23:51:45.111394 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.111593 kubelet[1918]: I0812 23:51:45.111410 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.114491 kubelet[1918]: I0812 23:51:45.111772 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.114491 kubelet[1918]: I0812 23:51:45.111731 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-hostproc" (OuterVolumeSpecName: "hostproc") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 12 23:51:45.114491 kubelet[1918]: I0812 23:51:45.113532 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 12 23:51:45.116462 kubelet[1918]: I0812 23:51:45.116043 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:51:45.117822 systemd[1]: var-lib-kubelet-pods-bdea2cbe\x2d791f\x2d4362\x2d978c\x2db656d3c1e90c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 12 23:51:45.119935 kubelet[1918]: I0812 23:51:45.119891 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 12 23:51:45.120316 systemd[1]: var-lib-kubelet-pods-bdea2cbe\x2d791f\x2d4362\x2d978c\x2db656d3c1e90c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 12 23:51:45.122251 systemd[1]: var-lib-kubelet-pods-bdea2cbe\x2d791f\x2d4362\x2d978c\x2db656d3c1e90c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 12 23:51:45.123047 kubelet[1918]: I0812 23:51:45.122731 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 12 23:51:45.125252 kubelet[1918]: I0812 23:51:45.125133 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-kube-api-access-zjfh9" (OuterVolumeSpecName: "kube-api-access-zjfh9") pod "bdea2cbe-791f-4362-978c-b656d3c1e90c" (UID: "bdea2cbe-791f-4362-978c-b656d3c1e90c"). InnerVolumeSpecName "kube-api-access-zjfh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:51:45.128755 systemd[1]: var-lib-kubelet-pods-bdea2cbe\x2d791f\x2d4362\x2d978c\x2db656d3c1e90c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjfh9.mount: Deactivated successfully. 
Aug 12 23:51:45.212380 kubelet[1918]: I0812 23:51:45.212207 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.212601 kubelet[1918]: I0812 23:51:45.212586 1918 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.212701 kubelet[1918]: I0812 23:51:45.212686 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjfh9\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-kube-api-access-zjfh9\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.212797 kubelet[1918]: I0812 23:51:45.212786 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.213297 kubelet[1918]: I0812 23:51:45.213273 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.213993 kubelet[1918]: I0812 23:51:45.213772 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214163 kubelet[1918]: I0812 23:51:45.214151 1918 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214267 kubelet[1918]: I0812 23:51:45.214257 1918 
reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214344 kubelet[1918]: I0812 23:51:45.214334 1918 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214417 kubelet[1918]: I0812 23:51:45.214407 1918 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdea2cbe-791f-4362-978c-b656d3c1e90c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214514 kubelet[1918]: I0812 23:51:45.214503 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdea2cbe-791f-4362-978c-b656d3c1e90c-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.214573 kubelet[1918]: I0812 23:51:45.214563 1918 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdea2cbe-791f-4362-978c-b656d3c1e90c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 12 23:51:45.690243 systemd[1]: Removed slice kubepods-burstable-podbdea2cbe_791f_4362_978c_b656d3c1e90c.slice. 
Aug 12 23:51:45.900938 kubelet[1918]: I0812 23:51:45.900851 1918 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-12T23:51:45Z","lastTransitionTime":"2025-08-12T23:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 12 23:51:45.956949 kubelet[1918]: I0812 23:51:45.956845 1918 scope.go:117] "RemoveContainer" containerID="f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f" Aug 12 23:51:45.959048 env[1213]: time="2025-08-12T23:51:45.959008582Z" level=info msg="RemoveContainer for \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\"" Aug 12 23:51:45.968609 env[1213]: time="2025-08-12T23:51:45.968278581Z" level=info msg="RemoveContainer for \"f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f\" returns successfully" Aug 12 23:51:46.017507 kubelet[1918]: E0812 23:51:46.017445 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bdea2cbe-791f-4362-978c-b656d3c1e90c" containerName="mount-cgroup" Aug 12 23:51:46.017507 kubelet[1918]: I0812 23:51:46.017513 1918 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdea2cbe-791f-4362-978c-b656d3c1e90c" containerName="mount-cgroup" Aug 12 23:51:46.025270 kubelet[1918]: W0812 23:51:46.025222 1918 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 12 23:51:46.025416 kubelet[1918]: E0812 23:51:46.025299 1918 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is 
forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 12 23:51:46.025416 kubelet[1918]: W0812 23:51:46.025222 1918 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 12 23:51:46.025416 kubelet[1918]: E0812 23:51:46.025338 1918 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 12 23:51:46.025416 kubelet[1918]: W0812 23:51:46.025374 1918 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 12 23:51:46.025416 kubelet[1918]: E0812 23:51:46.025390 1918 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 12 23:51:46.025760 kubelet[1918]: W0812 23:51:46.025731 1918 reflector.go:561] object-"kube-system"/"cilium-config": failed to 
list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 12 23:51:46.025812 kubelet[1918]: E0812 23:51:46.025774 1918 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 12 23:51:46.028233 systemd[1]: Created slice kubepods-burstable-podc366a8db_7209_43e3_b113_dd0f99b36710.slice. Aug 12 23:51:46.121950 kubelet[1918]: I0812 23:51:46.121864 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-xtables-lock\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.121950 kubelet[1918]: I0812 23:51:46.121920 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-ipsec-secrets\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.121950 kubelet[1918]: I0812 23:51:46.121938 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-cni-path\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.121950 kubelet[1918]: I0812 23:51:46.121958 1918 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-lib-modules\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.121976 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgb2\" (UniqueName: \"kubernetes.io/projected/c366a8db-7209-43e3-b113-dd0f99b36710-kube-api-access-crgb2\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.122002 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-run\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.122021 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-hostproc\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.122038 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-config-path\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t" Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.122056 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-host-proc-sys-kernel\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122453 kubelet[1918]: I0812 23:51:46.122074 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-bpf-maps\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122658 kubelet[1918]: I0812 23:51:46.122109 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-etc-cni-netd\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122658 kubelet[1918]: I0812 23:51:46.122127 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c366a8db-7209-43e3-b113-dd0f99b36710-clustermesh-secrets\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122658 kubelet[1918]: I0812 23:51:46.122154 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c366a8db-7209-43e3-b113-dd0f99b36710-hubble-tls\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122658 kubelet[1918]: I0812 23:51:46.122512 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-cgroup\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:46.122658 kubelet[1918]: I0812 23:51:46.122571 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c366a8db-7209-43e3-b113-dd0f99b36710-host-proc-sys-net\") pod \"cilium-bsf4t\" (UID: \"c366a8db-7209-43e3-b113-dd0f99b36710\") " pod="kube-system/cilium-bsf4t"
Aug 12 23:51:47.226345 kubelet[1918]: E0812 23:51:47.226246 1918 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:51:47.226345 kubelet[1918]: E0812 23:51:47.226362 1918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-config-path podName:c366a8db-7209-43e3-b113-dd0f99b36710 nodeName:}" failed. No retries permitted until 2025-08-12 23:51:47.726333049 +0000 UTC m=+94.135626499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-config-path") pod "cilium-bsf4t" (UID: "c366a8db-7209-43e3-b113-dd0f99b36710") : failed to sync configmap cache: timed out waiting for the condition
Aug 12 23:51:47.229051 kubelet[1918]: E0812 23:51:47.228719 1918 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Aug 12 23:51:47.229051 kubelet[1918]: E0812 23:51:47.228770 1918 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bsf4t: failed to sync secret cache: timed out waiting for the condition
Aug 12 23:51:47.229051 kubelet[1918]: E0812 23:51:47.228871 1918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c366a8db-7209-43e3-b113-dd0f99b36710-hubble-tls podName:c366a8db-7209-43e3-b113-dd0f99b36710 nodeName:}" failed. No retries permitted until 2025-08-12 23:51:47.728851782 +0000 UTC m=+94.138145232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/c366a8db-7209-43e3-b113-dd0f99b36710-hubble-tls") pod "cilium-bsf4t" (UID: "c366a8db-7209-43e3-b113-dd0f99b36710") : failed to sync secret cache: timed out waiting for the condition
Aug 12 23:51:47.229281 kubelet[1918]: E0812 23:51:47.229087 1918 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Aug 12 23:51:47.235397 kubelet[1918]: E0812 23:51:47.235121 1918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-ipsec-secrets podName:c366a8db-7209-43e3-b113-dd0f99b36710 nodeName:}" failed. No retries permitted until 2025-08-12 23:51:47.735094933 +0000 UTC m=+94.144388383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/c366a8db-7209-43e3-b113-dd0f99b36710-cilium-ipsec-secrets") pod "cilium-bsf4t" (UID: "c366a8db-7209-43e3-b113-dd0f99b36710") : failed to sync secret cache: timed out waiting for the condition
Aug 12 23:51:47.432188 kubelet[1918]: W0812 23:51:47.432124 1918 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdea2cbe_791f_4362_978c_b656d3c1e90c.slice/cri-containerd-f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f.scope WatchSource:0}: container "f84584eafccb9bc05bd310de0544fe468fb5d5fc13b0f8938f4658a4c1354f3f" in namespace "k8s.io": not found
Aug 12 23:51:47.688330 kubelet[1918]: I0812 23:51:47.687940 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdea2cbe-791f-4362-978c-b656d3c1e90c" path="/var/lib/kubelet/pods/bdea2cbe-791f-4362-978c-b656d3c1e90c/volumes"
Aug 12 23:51:47.830999 kubelet[1918]: E0812 23:51:47.830954 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:47.831628 env[1213]: time="2025-08-12T23:51:47.831562679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsf4t,Uid:c366a8db-7209-43e3-b113-dd0f99b36710,Namespace:kube-system,Attempt:0,}"
Aug 12 23:51:47.844537 env[1213]: time="2025-08-12T23:51:47.844464904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:51:47.844537 env[1213]: time="2025-08-12T23:51:47.844506464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:51:47.844537 env[1213]: time="2025-08-12T23:51:47.844517104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:51:47.844734 env[1213]: time="2025-08-12T23:51:47.844640065Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43 pid=3876 runtime=io.containerd.runc.v2
Aug 12 23:51:47.865268 systemd[1]: Started cri-containerd-06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43.scope.
Aug 12 23:51:47.897945 env[1213]: time="2025-08-12T23:51:47.897902295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bsf4t,Uid:c366a8db-7209-43e3-b113-dd0f99b36710,Namespace:kube-system,Attempt:0,} returns sandbox id \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\""
Aug 12 23:51:47.899118 kubelet[1918]: E0812 23:51:47.898819 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:47.901564 env[1213]: time="2025-08-12T23:51:47.901516313Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 12 23:51:47.916270 env[1213]: time="2025-08-12T23:51:47.916203188Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5\""
Aug 12 23:51:47.916774 env[1213]: time="2025-08-12T23:51:47.916721311Z" level=info msg="StartContainer for \"f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5\""
Aug 12 23:51:47.935782 systemd[1]: Started cri-containerd-f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5.scope.
Aug 12 23:51:47.986892 env[1213]: time="2025-08-12T23:51:47.986838386Z" level=info msg="StartContainer for \"f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5\" returns successfully"
Aug 12 23:51:47.993530 systemd[1]: cri-containerd-f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5.scope: Deactivated successfully.
Aug 12 23:51:48.019994 env[1213]: time="2025-08-12T23:51:48.019947561Z" level=info msg="shim disconnected" id=f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5
Aug 12 23:51:48.020339 env[1213]: time="2025-08-12T23:51:48.020316003Z" level=warning msg="cleaning up after shim disconnected" id=f8e0c870a9eb2c14e1e62e998bf4f87d5b5eb1f12ad84a2a7de8e126faca87b5 namespace=k8s.io
Aug 12 23:51:48.020454 env[1213]: time="2025-08-12T23:51:48.020437844Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:48.028185 env[1213]: time="2025-08-12T23:51:48.028143126Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3959 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:48.740062 kubelet[1918]: E0812 23:51:48.739994 1918 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 12 23:51:48.976836 kubelet[1918]: E0812 23:51:48.975605 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:48.978734 env[1213]: time="2025-08-12T23:51:48.978691441Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 12 23:51:49.012671 env[1213]: time="2025-08-12T23:51:49.008750528Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f\""
Aug 12 23:51:49.012671 env[1213]: time="2025-08-12T23:51:49.009477252Z" level=info msg="StartContainer for \"0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f\""
Aug 12 23:51:49.045541 systemd[1]: Started cri-containerd-0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f.scope.
Aug 12 23:51:49.093762 systemd[1]: cri-containerd-0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f.scope: Deactivated successfully.
Aug 12 23:51:49.100175 env[1213]: time="2025-08-12T23:51:49.100110902Z" level=info msg="StartContainer for \"0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f\" returns successfully"
Aug 12 23:51:49.139925 env[1213]: time="2025-08-12T23:51:49.139877055Z" level=info msg="shim disconnected" id=0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f
Aug 12 23:51:49.139925 env[1213]: time="2025-08-12T23:51:49.139919615Z" level=warning msg="cleaning up after shim disconnected" id=0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f namespace=k8s.io
Aug 12 23:51:49.139925 env[1213]: time="2025-08-12T23:51:49.139928735Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:49.148436 env[1213]: time="2025-08-12T23:51:49.148354864Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4019 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:49.745971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eda387ba7002b378d882322df59418071bd25f3d3fde12c414c89b52c92366f-rootfs.mount: Deactivated successfully.
Aug 12 23:51:49.981306 kubelet[1918]: E0812 23:51:49.981253 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:49.983543 env[1213]: time="2025-08-12T23:51:49.983481826Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 12 23:51:49.998608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356233974.mount: Deactivated successfully.
Aug 12 23:51:50.007520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042656490.mount: Deactivated successfully.
Aug 12 23:51:50.018242 env[1213]: time="2025-08-12T23:51:50.018177435Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177\""
Aug 12 23:51:50.019687 env[1213]: time="2025-08-12T23:51:50.019641564Z" level=info msg="StartContainer for \"8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177\""
Aug 12 23:51:50.037141 systemd[1]: Started cri-containerd-8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177.scope.
Aug 12 23:51:50.083304 env[1213]: time="2025-08-12T23:51:50.083248799Z" level=info msg="StartContainer for \"8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177\" returns successfully"
Aug 12 23:51:50.087777 systemd[1]: cri-containerd-8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177.scope: Deactivated successfully.
Aug 12 23:51:50.174067 env[1213]: time="2025-08-12T23:51:50.174016284Z" level=info msg="shim disconnected" id=8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177
Aug 12 23:51:50.174524 env[1213]: time="2025-08-12T23:51:50.174502447Z" level=warning msg="cleaning up after shim disconnected" id=8891f42b6f933a8257b6ac7d5e948a954f1d5abebba6cc99ddb90f67a63f7177 namespace=k8s.io
Aug 12 23:51:50.174601 env[1213]: time="2025-08-12T23:51:50.174586607Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:50.183074 env[1213]: time="2025-08-12T23:51:50.183029020Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:50.985029 kubelet[1918]: E0812 23:51:50.984997 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:50.987127 env[1213]: time="2025-08-12T23:51:50.987021736Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 12 23:51:51.007528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482881711.mount: Deactivated successfully.
Aug 12 23:51:51.017650 env[1213]: time="2025-08-12T23:51:51.017577212Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a\""
Aug 12 23:51:51.018333 env[1213]: time="2025-08-12T23:51:51.018296056Z" level=info msg="StartContainer for \"65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a\""
Aug 12 23:51:51.038317 systemd[1]: Started cri-containerd-65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a.scope.
Aug 12 23:51:51.107696 systemd[1]: cri-containerd-65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a.scope: Deactivated successfully.
Aug 12 23:51:51.110642 env[1213]: time="2025-08-12T23:51:51.110562223Z" level=info msg="StartContainer for \"65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a\" returns successfully"
Aug 12 23:51:51.159791 env[1213]: time="2025-08-12T23:51:51.159741546Z" level=info msg="shim disconnected" id=65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a
Aug 12 23:51:51.159791 env[1213]: time="2025-08-12T23:51:51.159792266Z" level=warning msg="cleaning up after shim disconnected" id=65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a namespace=k8s.io
Aug 12 23:51:51.160259 env[1213]: time="2025-08-12T23:51:51.159804066Z" level=info msg="cleaning up dead shim"
Aug 12 23:51:51.168151 env[1213]: time="2025-08-12T23:51:51.168107121Z" level=warning msg="cleanup warnings time=\"2025-08-12T23:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4127 runtime=io.containerd.runc.v2\n"
Aug 12 23:51:51.743415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65427ff6f6e753bc62c336dee3e907d8e024577190edf1db67a58f5b641bb34a-rootfs.mount: Deactivated successfully.
Aug 12 23:51:51.992580 kubelet[1918]: E0812 23:51:51.992536 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:51.996405 env[1213]: time="2025-08-12T23:51:51.996280084Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 12 23:51:52.060294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149078638.mount: Deactivated successfully.
Aug 12 23:51:52.071315 env[1213]: time="2025-08-12T23:51:52.071252961Z" level=info msg="CreateContainer within sandbox \"06c55a7d1892b2df490372e6d5ce449acf5dedafe048ab030b9cd44ed672ac43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb\""
Aug 12 23:51:52.072372 env[1213]: time="2025-08-12T23:51:52.072261928Z" level=info msg="StartContainer for \"878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb\""
Aug 12 23:51:52.098464 systemd[1]: Started cri-containerd-878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb.scope.
Aug 12 23:51:52.149999 env[1213]: time="2025-08-12T23:51:52.149940785Z" level=info msg="StartContainer for \"878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb\" returns successfully"
Aug 12 23:51:52.492459 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Aug 12 23:51:52.999457 kubelet[1918]: E0812 23:51:52.998661 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:53.032989 kubelet[1918]: I0812 23:51:53.032907 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bsf4t" podStartSLOduration=8.032890064 podStartE2EDuration="8.032890064s" podCreationTimestamp="2025-08-12 23:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:51:53.028625713 +0000 UTC m=+99.437919163" watchObservedRunningTime="2025-08-12 23:51:53.032890064 +0000 UTC m=+99.442183514"
Aug 12 23:51:54.001499 kubelet[1918]: E0812 23:51:54.001456 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:55.003205 kubelet[1918]: E0812 23:51:55.003168 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:55.486087 systemd-networkd[1037]: lxc_health: Link UP
Aug 12 23:51:55.503450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 12 23:51:55.503759 systemd-networkd[1037]: lxc_health: Gained carrier
Aug 12 23:51:55.685216 kubelet[1918]: E0812 23:51:55.685175 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:56.004766 kubelet[1918]: E0812 23:51:56.004728 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:57.007669 kubelet[1918]: E0812 23:51:57.007631 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:51:57.197677 systemd[1]: run-containerd-runc-k8s.io-878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb-runc.lhkOpW.mount: Deactivated successfully.
Aug 12 23:51:57.466574 systemd-networkd[1037]: lxc_health: Gained IPv6LL
Aug 12 23:51:58.009290 kubelet[1918]: E0812 23:51:58.009252 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:52:01.541539 systemd[1]: run-containerd-runc-k8s.io-878322b3e6c7687cc85025b454cff734ff4db9e155b43338d80af4e1708e0ceb-runc.z32svr.mount: Deactivated successfully.
Aug 12 23:52:01.626670 sshd[3742]: pam_unix(sshd:session): session closed for user core
Aug 12 23:52:01.633401 systemd[1]: session-26.scope: Deactivated successfully.
Aug 12 23:52:01.634289 systemd-logind[1203]: Session 26 logged out. Waiting for processes to exit.
Aug 12 23:52:01.634452 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:54476.service: Deactivated successfully.
Aug 12 23:52:01.636335 systemd-logind[1203]: Removed session 26.
Aug 12 23:52:02.683668 kubelet[1918]: E0812 23:52:02.683632 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"