Sep 9 00:35:21.676371 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:35:21.676393 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025
Sep 9 00:35:21.676402 kernel: efi: EFI v2.70 by EDK II
Sep 9 00:35:21.676408 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 9 00:35:21.676413 kernel: random: crng init done
Sep 9 00:35:21.676419 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:35:21.676425 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 9 00:35:21.676432 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:35:21.676438 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676443 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676449 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676454 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676460 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676466 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676474 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676480 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676486 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:35:21.676491 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:35:21.676497 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:35:21.676503 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:35:21.676508 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 9 00:35:21.676514 kernel: Zone ranges:
Sep 9 00:35:21.676520 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:35:21.676527 kernel: DMA32 empty
Sep 9 00:35:21.676532 kernel: Normal empty
Sep 9 00:35:21.676538 kernel: Movable zone start for each node
Sep 9 00:35:21.676543 kernel: Early memory node ranges
Sep 9 00:35:21.676549 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 9 00:35:21.676554 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 9 00:35:21.676560 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 9 00:35:21.676566 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 9 00:35:21.676572 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 9 00:35:21.676578 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 9 00:35:21.676583 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 9 00:35:21.676589 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:35:21.676603 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:35:21.676609 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:35:21.676615 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:35:21.676621 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:35:21.676627 kernel: psci: Trusted OS migration not required
Sep 9 00:35:21.676644 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:35:21.676650 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:35:21.676658 kernel: ACPI: SRAT not present
Sep 9 00:35:21.676664 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 9 00:35:21.676680 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 9 00:35:21.676686 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:35:21.676699 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:35:21.676706 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:35:21.676712 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:35:21.676718 kernel: CPU features: detected: Spectre-v4
Sep 9 00:35:21.676724 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:35:21.676732 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:35:21.676738 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:35:21.676744 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:35:21.676750 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:35:21.676756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:35:21.676762 kernel: Policy zone: DMA
Sep 9 00:35:21.676769 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:35:21.676776 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:35:21.676782 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:35:21.676788 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:35:21.676794 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:35:21.676801 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 9 00:35:21.676808 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:35:21.676814 kernel: trace event string verifier disabled
Sep 9 00:35:21.676820 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:35:21.676827 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:35:21.676833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:35:21.676839 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:35:21.676845 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:35:21.676851 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:35:21.676858 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:35:21.676864 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:35:21.676871 kernel: GICv3: 256 SPIs implemented
Sep 9 00:35:21.676877 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:35:21.676883 kernel: GICv3: Distributor has no Range Selector support
Sep 9 00:35:21.676889 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:35:21.676895 kernel: GICv3: 16 PPIs implemented
Sep 9 00:35:21.676901 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:35:21.676907 kernel: ACPI: SRAT not present
Sep 9 00:35:21.676913 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:35:21.676919 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:35:21.676925 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:35:21.676932 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 9 00:35:21.676938 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 9 00:35:21.676945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:35:21.676952 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:35:21.676958 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:35:21.676964 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:35:21.676970 kernel: arm-pv: using stolen time PV
Sep 9 00:35:21.676976 kernel: Console: colour dummy device 80x25
Sep 9 00:35:21.676983 kernel: ACPI: Core revision 20210730
Sep 9 00:35:21.676989 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:35:21.676996 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:35:21.677002 kernel: LSM: Security Framework initializing
Sep 9 00:35:21.677009 kernel: SELinux: Initializing.
Sep 9 00:35:21.677016 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:35:21.677022 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:35:21.677028 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:35:21.677034 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:35:21.677040 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:35:21.677047 kernel: Remapping and enabling EFI services.
Sep 9 00:35:21.677053 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:35:21.677059 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:35:21.677066 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:35:21.677073 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 9 00:35:21.677079 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:35:21.677085 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:35:21.677091 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:35:21.677098 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:35:21.677104 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 9 00:35:21.677110 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:35:21.677117 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:35:21.677123 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:35:21.677130 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:35:21.677136 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 9 00:35:21.677143 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:35:21.677149 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:35:21.677159 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:35:21.677167 kernel: SMP: Total of 4 processors activated.
Sep 9 00:35:21.677174 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:35:21.677180 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:35:21.677187 kernel: CPU features: detected: Common not Private translations
Sep 9 00:35:21.677193 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:35:21.677200 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:35:21.677206 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:35:21.677214 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:35:21.677221 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:35:21.677228 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:35:21.677234 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:35:21.677241 kernel: alternatives: patching kernel code
Sep 9 00:35:21.677248 kernel: devtmpfs: initialized
Sep 9 00:35:21.677254 kernel: KASLR enabled
Sep 9 00:35:21.677261 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:35:21.677268 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:35:21.677274 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:35:21.677281 kernel: SMBIOS 3.0.0 present.
Sep 9 00:35:21.677287 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 9 00:35:21.677294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:35:21.677301 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:35:21.677308 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:35:21.677315 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:35:21.677322 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:35:21.677328 kernel: audit: type=2000 audit(0.064:1): state=initialized audit_enabled=0 res=1
Sep 9 00:35:21.677335 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:35:21.677341 kernel: cpuidle: using governor menu
Sep 9 00:35:21.677348 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:35:21.677355 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:35:21.677361 kernel: ACPI: bus type PCI registered
Sep 9 00:35:21.677369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:35:21.677375 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:35:21.677382 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:35:21.677388 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:35:21.677395 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:35:21.677401 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:35:21.677408 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:35:21.677414 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:35:21.677421 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:35:21.677429 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:35:21.677435 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:35:21.677442 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 9 00:35:21.677448 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 9 00:35:21.677455 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 9 00:35:21.677461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:35:21.677468 kernel: ACPI: Interpreter enabled
Sep 9 00:35:21.677474 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:35:21.677481 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:35:21.677488 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:35:21.677495 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:35:21.677501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:35:21.677614 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:35:21.677708 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:35:21.677775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:35:21.677839 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:35:21.677905 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:35:21.677914 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:35:21.677921 kernel: PCI host bridge to bus 0000:00
Sep 9 00:35:21.677993 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:35:21.678050 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:35:21.678105 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:35:21.678162 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:35:21.678240 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:35:21.678313 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:35:21.678378 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:35:21.678441 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:35:21.678509 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:35:21.678572 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:35:21.678641 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:35:21.678715 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:35:21.678774 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:35:21.678830 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:35:21.678886 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:35:21.678895 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:35:21.678902 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:35:21.678909 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:35:21.678915 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:35:21.678924 kernel: iommu: Default domain type: Translated
Sep 9 00:35:21.678931 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:35:21.678937 kernel: vgaarb: loaded
Sep 9 00:35:21.678944 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 00:35:21.678954 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 00:35:21.678961 kernel: PTP clock support registered
Sep 9 00:35:21.678968 kernel: Registered efivars operations
Sep 9 00:35:21.678975 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:35:21.678982 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:35:21.678990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:35:21.678997 kernel: pnp: PnP ACPI init
Sep 9 00:35:21.681176 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:35:21.681191 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:35:21.681198 kernel: NET: Registered PF_INET protocol family
Sep 9 00:35:21.681205 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:35:21.681212 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:35:21.681219 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:35:21.681229 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:35:21.681236 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 9 00:35:21.681243 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:35:21.681249 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:35:21.681256 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:35:21.681262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:35:21.681269 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:35:21.681276 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:35:21.681282 kernel: kvm [1]: HYP mode not available
Sep 9 00:35:21.681290 kernel: Initialise system trusted keyrings
Sep 9 00:35:21.681297 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:35:21.681303 kernel: Key type asymmetric registered
Sep 9 00:35:21.681310 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:35:21.681316 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 00:35:21.681323 kernel: io scheduler mq-deadline registered
Sep 9 00:35:21.681329 kernel: io scheduler kyber registered
Sep 9 00:35:21.681336 kernel: io scheduler bfq registered
Sep 9 00:35:21.681342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:35:21.681350 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:35:21.681357 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:35:21.681431 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:35:21.681441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:35:21.681447 kernel: thunder_xcv, ver 1.0
Sep 9 00:35:21.681454 kernel: thunder_bgx, ver 1.0
Sep 9 00:35:21.681460 kernel: nicpf, ver 1.0
Sep 9 00:35:21.681467 kernel: nicvf, ver 1.0
Sep 9 00:35:21.681541 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:35:21.681604 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:35:21 UTC (1757378121)
Sep 9 00:35:21.681613 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:35:21.681620 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:35:21.681626 kernel: Segment Routing with IPv6
Sep 9 00:35:21.681641 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:35:21.681648 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:35:21.681655 kernel: Key type dns_resolver registered
Sep 9 00:35:21.681661 kernel: registered taskstats version 1
Sep 9 00:35:21.681670 kernel: Loading compiled-in X.509 certificates
Sep 9 00:35:21.681676 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252'
Sep 9 00:35:21.681683 kernel: Key type .fscrypt registered
Sep 9 00:35:21.681695 kernel: Key type fscrypt-provisioning registered
Sep 9 00:35:21.681703 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:35:21.681709 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:35:21.681716 kernel: ima: No architecture policies found
Sep 9 00:35:21.681722 kernel: clk: Disabling unused clocks
Sep 9 00:35:21.681729 kernel: Freeing unused kernel memory: 36416K
Sep 9 00:35:21.681737 kernel: Run /init as init process
Sep 9 00:35:21.681744 kernel: with arguments:
Sep 9 00:35:21.681750 kernel: /init
Sep 9 00:35:21.681757 kernel: with environment:
Sep 9 00:35:21.681763 kernel: HOME=/
Sep 9 00:35:21.681769 kernel: TERM=linux
Sep 9 00:35:21.681776 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:35:21.681784 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:35:21.681795 systemd[1]: Detected virtualization kvm.
Sep 9 00:35:21.681802 systemd[1]: Detected architecture arm64.
Sep 9 00:35:21.681809 systemd[1]: Running in initrd.
Sep 9 00:35:21.681816 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:35:21.681823 systemd[1]: Hostname set to .
Sep 9 00:35:21.681831 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:35:21.681838 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:35:21.681845 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:35:21.681853 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:35:21.681860 systemd[1]: Reached target paths.target.
Sep 9 00:35:21.681867 systemd[1]: Reached target slices.target.
Sep 9 00:35:21.681874 systemd[1]: Reached target swap.target.
Sep 9 00:35:21.681881 systemd[1]: Reached target timers.target.
Sep 9 00:35:21.681888 systemd[1]: Listening on iscsid.socket.
Sep 9 00:35:21.681895 systemd[1]: Listening on iscsiuio.socket.
Sep 9 00:35:21.681903 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:35:21.681910 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:35:21.681917 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:35:21.681924 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:35:21.681931 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:35:21.681938 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:35:21.681945 systemd[1]: Reached target sockets.target.
Sep 9 00:35:21.681952 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:35:21.681959 systemd[1]: Finished network-cleanup.service.
Sep 9 00:35:21.681969 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:35:21.681976 systemd[1]: Starting systemd-journald.service...
Sep 9 00:35:21.681983 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:35:21.681990 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:35:21.681997 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 9 00:35:21.682004 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:35:21.682011 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:35:21.682018 kernel: audit: type=1130 audit(1757378121.678:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.682025 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 9 00:35:21.682037 systemd-journald[289]: Journal started
Sep 9 00:35:21.682076 systemd-journald[289]: Runtime Journal (/run/log/journal/6d18eb510d74416bbc88d81af6d5fa94) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:35:21.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.676661 systemd-modules-load[290]: Inserted module 'overlay'
Sep 9 00:35:21.685544 systemd[1]: Started systemd-journald.service.
Sep 9 00:35:21.685569 kernel: audit: type=1130 audit(1757378121.681:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.688644 kernel: audit: type=1130 audit(1757378121.685:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.687006 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 9 00:35:21.689974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:35:21.696782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:35:21.698132 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:35:21.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.698394 systemd-resolved[291]: Positive Trust Anchors:
Sep 9 00:35:21.701473 kernel: audit: type=1130 audit(1757378121.698:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.701490 kernel: Bridge firewalling registered
Sep 9 00:35:21.698401 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:35:21.698428 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:35:21.701459 systemd-modules-load[290]: Inserted module 'br_netfilter'
Sep 9 00:35:21.703058 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 9 00:35:21.706914 systemd[1]: Started systemd-resolved.service.
Sep 9 00:35:21.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.710648 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:35:21.714399 kernel: audit: type=1130 audit(1757378121.709:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.714618 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 9 00:35:21.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.715997 systemd[1]: Starting dracut-cmdline.service...
Sep 9 00:35:21.719094 kernel: audit: type=1130 audit(1757378121.714:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.719663 kernel: SCSI subsystem initialized
Sep 9 00:35:21.724730 dracut-cmdline[308]: dracut-dracut-053
Sep 9 00:35:21.727060 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:35:21.731699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:35:21.731721 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:35:21.731730 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 9 00:35:21.734874 systemd-modules-load[290]: Inserted module 'dm_multipath'
Sep 9 00:35:21.735622 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:35:21.737119 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:35:21.740266 kernel: audit: type=1130 audit(1757378121.735:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.746666 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:35:21.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.750660 kernel: audit: type=1130 audit(1757378121.746:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.785668 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:35:21.798665 kernel: iscsi: registered transport (tcp)
Sep 9 00:35:21.812764 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:35:21.812784 kernel: QLogic iSCSI HBA Driver
Sep 9 00:35:21.846483 systemd[1]: Finished dracut-cmdline.service.
Sep 9 00:35:21.849648 kernel: audit: type=1130 audit(1757378121.846:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:21.848058 systemd[1]: Starting dracut-pre-udev.service...
Sep 9 00:35:21.889665 kernel: raid6: neonx8 gen() 13407 MB/s
Sep 9 00:35:21.907219 kernel: raid6: neonx8 xor() 10515 MB/s
Sep 9 00:35:21.925677 kernel: raid6: neonx4 gen() 14125 MB/s
Sep 9 00:35:21.944676 kernel: raid6: neonx4 xor() 12136 MB/s
Sep 9 00:35:21.963046 kernel: raid6: neonx2 gen() 12906 MB/s
Sep 9 00:35:21.979660 kernel: raid6: neonx2 xor() 9426 MB/s
Sep 9 00:35:21.996651 kernel: raid6: neonx1 gen() 9840 MB/s
Sep 9 00:35:22.013679 kernel: raid6: neonx1 xor() 8331 MB/s
Sep 9 00:35:22.030677 kernel: raid6: int64x8 gen() 5935 MB/s
Sep 9 00:35:22.047660 kernel: raid6: int64x8 xor() 3437 MB/s
Sep 9 00:35:22.064671 kernel: raid6: int64x4 gen() 6968 MB/s
Sep 9 00:35:22.081705 kernel: raid6: int64x4 xor() 3668 MB/s
Sep 9 00:35:22.098668 kernel: raid6: int64x2 gen() 6010 MB/s
Sep 9 00:35:22.115665 kernel: raid6: int64x2 xor() 3266 MB/s
Sep 9 00:35:22.132659 kernel: raid6: int64x1 gen() 4977 MB/s
Sep 9 00:35:22.150011 kernel: raid6: int64x1 xor() 2602 MB/s
Sep 9 00:35:22.150034 kernel: raid6: using algorithm neonx4 gen() 14125 MB/s
Sep 9 00:35:22.150044 kernel: raid6: .... xor() 12136 MB/s, rmw enabled
Sep 9 00:35:22.150062 kernel: raid6: using neon recovery algorithm
Sep 9 00:35:22.161036 kernel: xor: measuring software checksum speed
Sep 9 00:35:22.161059 kernel: 8regs : 17209 MB/sec
Sep 9 00:35:22.161647 kernel: 32regs : 20707 MB/sec
Sep 9 00:35:22.162681 kernel: arm64_neon : 26280 MB/sec
Sep 9 00:35:22.162715 kernel: xor: using function: arm64_neon (26280 MB/sec)
Sep 9 00:35:22.218657 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 9 00:35:22.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:22.228980 systemd[1]: Finished dracut-pre-udev.service.
Sep 9 00:35:22.231000 audit: BPF prog-id=7 op=LOAD Sep 9 00:35:22.231000 audit: BPF prog-id=8 op=LOAD Sep 9 00:35:22.233771 systemd[1]: Starting systemd-udevd.service... Sep 9 00:35:22.246758 systemd-udevd[492]: Using default interface naming scheme 'v252'. Sep 9 00:35:22.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:22.254868 systemd[1]: Started systemd-udevd.service. Sep 9 00:35:22.257783 systemd[1]: Starting dracut-pre-trigger.service... Sep 9 00:35:22.271834 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Sep 9 00:35:22.300792 systemd[1]: Finished dracut-pre-trigger.service. Sep 9 00:35:22.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:22.304814 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:35:22.349110 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:35:22.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:22.372756 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:35:22.375541 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:35:22.375566 kernel: GPT:9289727 != 19775487 Sep 9 00:35:22.375576 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:35:22.375585 kernel: GPT:9289727 != 19775487 Sep 9 00:35:22.375594 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:35:22.375602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:22.394926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Sep 9 00:35:22.399657 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (554) Sep 9 00:35:22.401141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:35:22.403806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 9 00:35:22.404540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:35:22.410451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:35:22.412014 systemd[1]: Starting disk-uuid.service... Sep 9 00:35:22.418132 disk-uuid[562]: Primary Header is updated. Sep 9 00:35:22.418132 disk-uuid[562]: Secondary Entries is updated. Sep 9 00:35:22.418132 disk-uuid[562]: Secondary Header is updated. Sep 9 00:35:22.422657 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:22.424649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:22.428678 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:23.430654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:35:23.430854 disk-uuid[563]: The operation has completed successfully. Sep 9 00:35:23.459847 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:35:23.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.459940 systemd[1]: Finished disk-uuid.service. Sep 9 00:35:23.461407 systemd[1]: Starting verity-setup.service... Sep 9 00:35:23.486659 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:35:23.534161 systemd[1]: Found device dev-mapper-usr.device. Sep 9 00:35:23.536345 systemd[1]: Mounting sysusr-usr.mount... 
Sep 9 00:35:23.537067 systemd[1]: Finished verity-setup.service. Sep 9 00:35:23.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.601664 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:35:23.602012 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:35:23.602705 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:35:23.603410 systemd[1]: Starting ignition-setup.service... Sep 9 00:35:23.605595 systemd[1]: Starting parse-ip-for-networkd.service... Sep 9 00:35:23.621121 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:23.621171 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:35:23.621181 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:35:23.632310 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:35:23.643029 systemd[1]: Finished ignition-setup.service. Sep 9 00:35:23.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.644755 systemd[1]: Starting ignition-fetch-offline.service... Sep 9 00:35:23.694546 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:35:23.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.695000 audit: BPF prog-id=9 op=LOAD Sep 9 00:35:23.696892 systemd[1]: Starting systemd-networkd.service... 
Sep 9 00:35:23.719957 systemd-networkd[741]: lo: Link UP Sep 9 00:35:23.719971 systemd-networkd[741]: lo: Gained carrier Sep 9 00:35:23.720733 systemd-networkd[741]: Enumeration completed Sep 9 00:35:23.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.720846 systemd[1]: Started systemd-networkd.service. Sep 9 00:35:23.721232 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:23.722959 systemd[1]: Reached target network.target. Sep 9 00:35:23.723084 systemd-networkd[741]: eth0: Link UP Sep 9 00:35:23.723088 systemd-networkd[741]: eth0: Gained carrier Sep 9 00:35:23.727044 systemd[1]: Starting iscsiuio.service... Sep 9 00:35:23.735512 systemd[1]: Started iscsiuio.service. Sep 9 00:35:23.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.737346 systemd[1]: Starting iscsid.service... 
Sep 9 00:35:23.740494 ignition[669]: Ignition 2.14.0 Sep 9 00:35:23.740501 ignition[669]: Stage: fetch-offline Sep 9 00:35:23.740547 ignition[669]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:23.740556 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:23.742733 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:23.740710 ignition[669]: parsed url from cmdline: "" Sep 9 00:35:23.740714 ignition[669]: no config URL provided Sep 9 00:35:23.740719 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:35:23.747365 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:35:23.747365 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 9 00:35:23.747365 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:35:23.747365 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:35:23.747365 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:35:23.747365 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:35:23.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.740727 ignition[669]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:35:23.750524 systemd[1]: Started iscsid.service. 
Sep 9 00:35:23.740747 ignition[669]: op(1): [started] loading QEMU firmware config module Sep 9 00:35:23.755293 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:35:23.740752 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:35:23.761620 ignition[669]: op(1): [finished] loading QEMU firmware config module Sep 9 00:35:23.767335 systemd[1]: Finished dracut-initqueue.service. Sep 9 00:35:23.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.768199 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:35:23.769521 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:35:23.771145 systemd[1]: Reached target remote-fs.target. Sep 9 00:35:23.773396 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:35:23.781381 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:35:23.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.810521 ignition[669]: parsing config with SHA512: 4994a74bab600c8b5432a49b9858a6da5c056feec4e1ca4fe1feb3590c55ed3170eb588d38c46cadcba0034fa5298119c4192d65791e096f4390ec1151edecff Sep 9 00:35:23.818263 unknown[669]: fetched base config from "system" Sep 9 00:35:23.818279 unknown[669]: fetched user config from "qemu" Sep 9 00:35:23.818872 ignition[669]: fetch-offline: fetch-offline passed Sep 9 00:35:23.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.820111 systemd[1]: Finished ignition-fetch-offline.service. 
Sep 9 00:35:23.818948 ignition[669]: Ignition finished successfully Sep 9 00:35:23.821561 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:35:23.822344 systemd[1]: Starting ignition-kargs.service... Sep 9 00:35:23.831099 ignition[763]: Ignition 2.14.0 Sep 9 00:35:23.831108 ignition[763]: Stage: kargs Sep 9 00:35:23.831201 ignition[763]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:23.831210 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:23.836192 systemd[1]: Finished ignition-kargs.service. Sep 9 00:35:23.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.832246 ignition[763]: kargs: kargs passed Sep 9 00:35:23.832287 ignition[763]: Ignition finished successfully Sep 9 00:35:23.838307 systemd[1]: Starting ignition-disks.service... Sep 9 00:35:23.845147 ignition[769]: Ignition 2.14.0 Sep 9 00:35:23.845155 ignition[769]: Stage: disks Sep 9 00:35:23.845246 ignition[769]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:23.847398 systemd[1]: Finished ignition-disks.service. Sep 9 00:35:23.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.845256 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:23.848947 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:35:23.846114 ignition[769]: disks: disks passed Sep 9 00:35:23.850186 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:35:23.846159 ignition[769]: Ignition finished successfully Sep 9 00:35:23.851798 systemd[1]: Reached target local-fs.target. Sep 9 00:35:23.853164 systemd[1]: Reached target sysinit.target. 
Sep 9 00:35:23.854216 systemd[1]: Reached target basic.target. Sep 9 00:35:23.856391 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:35:23.868121 systemd-fsck[777]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:35:23.871723 systemd[1]: Finished systemd-fsck-root.service. Sep 9 00:35:23.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.874484 systemd[1]: Mounting sysroot.mount... Sep 9 00:35:23.882654 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 9 00:35:23.882661 systemd[1]: Mounted sysroot.mount. Sep 9 00:35:23.883280 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:35:23.885315 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:35:23.886156 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:35:23.886191 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:35:23.886213 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:35:23.888179 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:35:23.889773 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:35:23.894022 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:35:23.897719 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:35:23.901902 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:35:23.905744 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:35:23.938560 systemd[1]: Finished initrd-setup-root.service. 
Sep 9 00:35:23.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.940333 systemd[1]: Starting ignition-mount.service... Sep 9 00:35:23.941586 systemd[1]: Starting sysroot-boot.service... Sep 9 00:35:23.945991 bash[828]: umount: /sysroot/usr/share/oem: not mounted. Sep 9 00:35:23.955252 ignition[830]: INFO : Ignition 2.14.0 Sep 9 00:35:23.955252 ignition[830]: INFO : Stage: mount Sep 9 00:35:23.957407 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:23.957407 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:23.957407 ignition[830]: INFO : mount: mount passed Sep 9 00:35:23.957407 ignition[830]: INFO : Ignition finished successfully Sep 9 00:35:23.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:23.957539 systemd[1]: Finished ignition-mount.service. Sep 9 00:35:23.962185 systemd[1]: Finished sysroot-boot.service. Sep 9 00:35:23.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:24.552779 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:35:24.559649 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (839) Sep 9 00:35:24.561167 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:35:24.561183 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:35:24.561193 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:35:24.566380 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 9 00:35:24.568345 systemd[1]: Starting ignition-files.service... Sep 9 00:35:24.583775 ignition[859]: INFO : Ignition 2.14.0 Sep 9 00:35:24.583775 ignition[859]: INFO : Stage: files Sep 9 00:35:24.585501 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:35:24.585501 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:35:24.585501 ignition[859]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:35:24.590115 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:35:24.590115 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:35:24.590115 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:35:24.596338 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:35:24.596338 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:35:24.596338 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:35:24.596338 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 9 00:35:24.594527 unknown[859]: wrote ssh authorized keys file for user: core Sep 9 00:35:24.652238 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:35:24.752085 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:35:24.753863 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:35:24.753863 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 00:35:24.991676 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:35:25.250588 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:35:25.250588 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:35:25.253847 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 9 00:35:25.431843 systemd-networkd[741]: eth0: Gained IPv6LL Sep 9 00:35:25.677063 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:35:26.161420 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:35:26.161420 ignition[859]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:26.165073 ignition[859]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:35:26.191025 ignition[859]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:35:26.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.193772 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 9 00:35:26.193793 kernel: audit: type=1130 audit(1757378126.193:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.193804 ignition[859]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:35:26.193804 ignition[859]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:26.193804 ignition[859]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:35:26.193804 ignition[859]: INFO : files: files passed Sep 9 00:35:26.193804 ignition[859]: INFO : Ignition finished successfully Sep 9 00:35:26.192489 systemd[1]: Finished ignition-files.service. 
Sep 9 00:35:26.208862 kernel: audit: type=1130 audit(1757378126.202:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.208887 kernel: audit: type=1131 audit(1757378126.202:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.194049 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:35:26.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.197564 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:35:26.214283 kernel: audit: type=1130 audit(1757378126.208:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.214302 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:35:26.198479 systemd[1]: Starting ignition-quench.service... 
Sep 9 00:35:26.216713 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:35:26.202108 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:35:26.202201 systemd[1]: Finished ignition-quench.service. Sep 9 00:35:26.204961 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:35:26.209629 systemd[1]: Reached target ignition-complete.target. Sep 9 00:35:26.213815 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:35:26.228483 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:35:26.228590 systemd[1]: Finished initrd-parse-etc.service. Sep 9 00:35:26.234357 kernel: audit: type=1130 audit(1757378126.229:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.234394 kernel: audit: type=1131 audit(1757378126.229:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.230095 systemd[1]: Reached target initrd-fs.target. Sep 9 00:35:26.234918 systemd[1]: Reached target initrd.target. Sep 9 00:35:26.235935 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:35:26.236811 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:35:26.247378 systemd[1]: Finished dracut-pre-pivot.service. 
Sep 9 00:35:26.251552 kernel: audit: type=1130 audit(1757378126.247:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.248979 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:35:26.260059 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:35:26.260800 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:35:26.261886 systemd[1]: Stopped target timers.target. Sep 9 00:35:26.262920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:35:26.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.263041 systemd[1]: Stopped dracut-pre-pivot.service. Sep 9 00:35:26.267471 kernel: audit: type=1131 audit(1757378126.263:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.264052 systemd[1]: Stopped target initrd.target. Sep 9 00:35:26.267058 systemd[1]: Stopped target basic.target. Sep 9 00:35:26.268064 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:35:26.269118 systemd[1]: Stopped target ignition-diskful.target. Sep 9 00:35:26.270622 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:35:26.272875 systemd[1]: Stopped target remote-fs.target. Sep 9 00:35:26.274160 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:35:26.275432 systemd[1]: Stopped target sysinit.target. Sep 9 00:35:26.276403 systemd[1]: Stopped target local-fs.target. 
Sep 9 00:35:26.277403 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:35:26.278384 systemd[1]: Stopped target swap.target. Sep 9 00:35:26.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.279304 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:35:26.283959 kernel: audit: type=1131 audit(1757378126.279:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.279425 systemd[1]: Stopped dracut-pre-mount.service. Sep 9 00:35:26.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.280554 systemd[1]: Stopped target cryptsetup.target. Sep 9 00:35:26.288015 kernel: audit: type=1131 audit(1757378126.283:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:26.283373 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:35:26.283480 systemd[1]: Stopped dracut-initqueue.service. Sep 9 00:35:26.284588 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:35:26.284698 systemd[1]: Stopped ignition-fetch-offline.service. Sep 9 00:35:26.287624 systemd[1]: Stopped target paths.target. Sep 9 00:35:26.288523 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 9 00:35:26.292706 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 9 00:35:26.294421 systemd[1]: Stopped target slices.target.
Sep 9 00:35:26.295111 systemd[1]: Stopped target sockets.target.
Sep 9 00:35:26.297153 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:35:26.297222 systemd[1]: Closed iscsid.socket.
Sep 9 00:35:26.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.298077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:35:26.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.298174 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 9 00:35:26.300871 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:35:26.300967 systemd[1]: Stopped ignition-files.service.
Sep 9 00:35:26.302615 systemd[1]: Stopping ignition-mount.service...
Sep 9 00:35:26.303926 systemd[1]: Stopping iscsiuio.service...
Sep 9 00:35:26.306202 systemd[1]: Stopping sysroot-boot.service...
Sep 9 00:35:26.308577 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:35:26.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.308762 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 9 00:35:26.312489 ignition[899]: INFO : Ignition 2.14.0
Sep 9 00:35:26.312489 ignition[899]: INFO : Stage: umount
Sep 9 00:35:26.312489 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:35:26.312489 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:35:26.312489 ignition[899]: INFO : umount: umount passed
Sep 9 00:35:26.312489 ignition[899]: INFO : Ignition finished successfully
Sep 9 00:35:26.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.309793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:35:26.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.309886 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 9 00:35:26.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.312496 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 9 00:35:26.312608 systemd[1]: Stopped iscsiuio.service.
Sep 9 00:35:26.314105 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:35:26.314208 systemd[1]: Stopped ignition-mount.service.
Sep 9 00:35:26.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.315330 systemd[1]: Stopped target network.target.
Sep 9 00:35:26.316441 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:35:26.316474 systemd[1]: Closed iscsiuio.socket.
Sep 9 00:35:26.319906 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:35:26.319959 systemd[1]: Stopped ignition-disks.service.
Sep 9 00:35:26.321000 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:35:26.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.321036 systemd[1]: Stopped ignition-kargs.service.
Sep 9 00:35:26.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.322241 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:35:26.322289 systemd[1]: Stopped ignition-setup.service.
Sep 9 00:35:26.323161 systemd[1]: Stopping systemd-networkd.service...
Sep 9 00:35:26.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.324250 systemd[1]: Stopping systemd-resolved.service...
Sep 9 00:35:26.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.326145 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:35:26.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.341000 audit: BPF prog-id=6 op=UNLOAD
Sep 9 00:35:26.326716 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:35:26.326802 systemd[1]: Finished initrd-cleanup.service.
Sep 9 00:35:26.330169 systemd-networkd[741]: eth0: DHCPv6 lease lost
Sep 9 00:35:26.346000 audit: BPF prog-id=9 op=UNLOAD
Sep 9 00:35:26.331773 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:35:26.331879 systemd[1]: Stopped systemd-resolved.service.
Sep 9 00:35:26.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.333375 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:35:26.333460 systemd[1]: Stopped systemd-networkd.service.
Sep 9 00:35:26.334674 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:35:26.334715 systemd[1]: Closed systemd-networkd.socket.
Sep 9 00:35:26.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.336450 systemd[1]: Stopping network-cleanup.service...
Sep 9 00:35:26.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.337148 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:35:26.337214 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 9 00:35:26.338475 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:35:26.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.338516 systemd[1]: Stopped systemd-sysctl.service.
Sep 9 00:35:26.340321 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:35:26.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.340364 systemd[1]: Stopped systemd-modules-load.service.
Sep 9 00:35:26.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.341319 systemd[1]: Stopping systemd-udevd.service...
Sep 9 00:35:26.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.346268 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:35:26.348691 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:35:26.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.348849 systemd[1]: Stopped systemd-udevd.service.
Sep 9 00:35:26.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.349807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:35:26.349844 systemd[1]: Closed systemd-udevd-control.socket.
Sep 9 00:35:26.352495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:35:26.352541 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 9 00:35:26.353506 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:35:26.353550 systemd[1]: Stopped dracut-pre-udev.service.
Sep 9 00:35:26.355600 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:35:26.355729 systemd[1]: Stopped dracut-cmdline.service.
Sep 9 00:35:26.356725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:35:26.356763 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 9 00:35:26.360764 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 9 00:35:26.362587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:35:26.362728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Sep 9 00:35:26.363717 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:35:26.363761 systemd[1]: Stopped kmod-static-nodes.service.
Sep 9 00:35:26.366521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:35:26.366569 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 9 00:35:26.368551 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 00:35:26.369087 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:35:26.369181 systemd[1]: Stopped network-cleanup.service.
Sep 9 00:35:26.370260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:35:26.370341 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 9 00:35:26.388159 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:35:26.388261 systemd[1]: Stopped sysroot-boot.service.
Sep 9 00:35:26.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.389574 systemd[1]: Reached target initrd-switch-root.target.
Sep 9 00:35:26.390563 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:35:26.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.390616 systemd[1]: Stopped initrd-setup-root.service.
Sep 9 00:35:26.392610 systemd[1]: Starting initrd-switch-root.service...
Sep 9 00:35:26.399953 systemd[1]: Switching root.
Sep 9 00:35:26.416274 iscsid[746]: iscsid shutting down.
Sep 9 00:35:26.417001 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:35:26.417046 systemd-journald[289]: Journal stopped
Sep 9 00:35:28.483378 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 9 00:35:28.483447 kernel: SELinux: Class anon_inode not defined in policy.
Sep 9 00:35:28.483461 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 9 00:35:28.483472 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:35:28.483481 kernel: SELinux: policy capability open_perms=1
Sep 9 00:35:28.483493 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:35:28.483503 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:35:28.483512 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:35:28.483545 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:35:28.483608 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:35:28.483617 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:35:28.483645 systemd[1]: Successfully loaded SELinux policy in 38.062ms.
Sep 9 00:35:28.483681 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.313ms.
Sep 9 00:35:28.483697 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:35:28.483708 systemd[1]: Detected virtualization kvm.
Sep 9 00:35:28.483719 systemd[1]: Detected architecture arm64.
Sep 9 00:35:28.483731 systemd[1]: Detected first boot.
Sep 9 00:35:28.483741 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:35:28.483753 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 9 00:35:28.483763 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:35:28.483774 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:35:28.483785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:35:28.483797 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:35:28.483808 systemd[1]: iscsid.service: Deactivated successfully.
Sep 9 00:35:28.483818 systemd[1]: Stopped iscsid.service.
Sep 9 00:35:28.483829 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:35:28.483840 systemd[1]: Stopped initrd-switch-root.service.
Sep 9 00:35:28.483852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:35:28.483862 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 9 00:35:28.483873 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 9 00:35:28.483883 systemd[1]: Created slice system-getty.slice.
Sep 9 00:35:28.483894 systemd[1]: Created slice system-modprobe.slice.
Sep 9 00:35:28.483905 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 9 00:35:28.483915 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 9 00:35:28.483927 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 9 00:35:28.483938 systemd[1]: Created slice user.slice.
Sep 9 00:35:28.483949 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:35:28.483960 systemd[1]: Started systemd-ask-password-wall.path.
Sep 9 00:35:28.483970 systemd[1]: Set up automount boot.automount.
Sep 9 00:35:28.483980 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 9 00:35:28.483991 systemd[1]: Stopped target initrd-switch-root.target.
Sep 9 00:35:28.484001 systemd[1]: Stopped target initrd-fs.target.
Sep 9 00:35:28.484013 systemd[1]: Stopped target initrd-root-fs.target.
Sep 9 00:35:28.484024 systemd[1]: Reached target integritysetup.target.
Sep 9 00:35:28.484035 systemd[1]: Reached target remote-cryptsetup.target.
Sep 9 00:35:28.484045 systemd[1]: Reached target remote-fs.target.
Sep 9 00:35:28.484056 systemd[1]: Reached target slices.target.
Sep 9 00:35:28.484067 systemd[1]: Reached target swap.target.
Sep 9 00:35:28.484083 systemd[1]: Reached target torcx.target.
Sep 9 00:35:28.484094 systemd[1]: Reached target veritysetup.target.
Sep 9 00:35:28.484105 systemd[1]: Listening on systemd-coredump.socket.
Sep 9 00:35:28.484115 systemd[1]: Listening on systemd-initctl.socket.
Sep 9 00:35:28.484125 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:35:28.484137 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:35:28.484147 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:35:28.484158 systemd[1]: Listening on systemd-userdbd.socket.
Sep 9 00:35:28.484168 systemd[1]: Mounting dev-hugepages.mount...
Sep 9 00:35:28.484179 systemd[1]: Mounting dev-mqueue.mount...
Sep 9 00:35:28.484190 systemd[1]: Mounting media.mount...
Sep 9 00:35:28.484201 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 9 00:35:28.484211 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 9 00:35:28.484222 systemd[1]: Mounting tmp.mount...
Sep 9 00:35:28.484233 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 9 00:35:28.484249 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:35:28.484259 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:35:28.484270 systemd[1]: Starting modprobe@configfs.service...
Sep 9 00:35:28.484281 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:35:28.484291 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:35:28.484302 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:35:28.484313 systemd[1]: Starting modprobe@fuse.service...
Sep 9 00:35:28.484323 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:35:28.484335 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:35:28.484345 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:35:28.484355 systemd[1]: Stopped systemd-fsck-root.service.
Sep 9 00:35:28.484366 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:35:28.484377 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:35:28.484387 systemd[1]: Stopped systemd-journald.service.
Sep 9 00:35:28.484397 kernel: fuse: init (API version 7.34)
Sep 9 00:35:28.484407 systemd[1]: Starting systemd-journald.service...
Sep 9 00:35:28.484417 kernel: loop: module loaded
Sep 9 00:35:28.484428 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:35:28.484439 systemd[1]: Starting systemd-network-generator.service...
Sep 9 00:35:28.484449 systemd[1]: Starting systemd-remount-fs.service...
Sep 9 00:35:28.484463 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:35:28.484474 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:35:28.484488 systemd[1]: Stopped verity-setup.service.
Sep 9 00:35:28.484499 systemd[1]: Mounted dev-hugepages.mount.
Sep 9 00:35:28.484510 systemd[1]: Mounted dev-mqueue.mount.
Sep 9 00:35:28.484524 systemd-journald[998]: Journal started
Sep 9 00:35:28.484567 systemd-journald[998]: Runtime Journal (/run/log/journal/6d18eb510d74416bbc88d81af6d5fa94) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:35:26.485000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:35:26.594000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:35:26.594000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:35:26.594000 audit: BPF prog-id=10 op=LOAD
Sep 9 00:35:26.594000 audit: BPF prog-id=10 op=UNLOAD
Sep 9 00:35:26.594000 audit: BPF prog-id=11 op=LOAD
Sep 9 00:35:26.594000 audit: BPF prog-id=11 op=UNLOAD
Sep 9 00:35:26.641000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 9 00:35:26.641000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:35:26.641000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 9 00:35:26.642000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 9 00:35:26.642000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:35:26.642000 audit: CWD cwd="/"
Sep 9 00:35:26.642000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 9 00:35:26.642000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 9 00:35:26.642000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 9 00:35:28.353000 audit: BPF prog-id=12 op=LOAD
Sep 9 00:35:28.353000 audit: BPF prog-id=3 op=UNLOAD
Sep 9 00:35:28.353000 audit: BPF prog-id=13 op=LOAD
Sep 9 00:35:28.353000 audit: BPF prog-id=14 op=LOAD
Sep 9 00:35:28.353000 audit: BPF prog-id=4 op=UNLOAD
Sep 9 00:35:28.353000 audit: BPF prog-id=5 op=UNLOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=15 op=LOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=12 op=UNLOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=16 op=LOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=17 op=LOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=13 op=UNLOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=14 op=UNLOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=18 op=LOAD
Sep 9 00:35:28.354000 audit: BPF prog-id=15 op=UNLOAD
Sep 9 00:35:28.355000 audit: BPF prog-id=19 op=LOAD
Sep 9 00:35:28.355000 audit: BPF prog-id=20 op=LOAD
Sep 9 00:35:28.355000 audit: BPF prog-id=16 op=UNLOAD
Sep 9 00:35:28.355000 audit: BPF prog-id=17 op=UNLOAD
Sep 9 00:35:28.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.367000 audit: BPF prog-id=18 op=UNLOAD
Sep 9 00:35:28.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.461000 audit: BPF prog-id=21 op=LOAD
Sep 9 00:35:28.461000 audit: BPF prog-id=22 op=LOAD
Sep 9 00:35:28.461000 audit: BPF prog-id=23 op=LOAD
Sep 9 00:35:28.461000 audit: BPF prog-id=19 op=UNLOAD
Sep 9 00:35:28.461000 audit: BPF prog-id=20 op=UNLOAD
Sep 9 00:35:28.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.482000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 9 00:35:28.482000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff659c430 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:35:28.482000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 9 00:35:26.639929 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:35:28.352067 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:35:26.640217 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 9 00:35:28.352080 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 9 00:35:26.640237 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 9 00:35:28.356385 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:35:26.640270 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 9 00:35:26.640281 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 9 00:35:26.640569 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 9 00:35:26.640583 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 9 00:35:26.640842 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 9 00:35:28.486731 systemd[1]: Started systemd-journald.service.
Sep 9 00:35:26.640885 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 9 00:35:26.640897 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 9 00:35:26.641707 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 9 00:35:28.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:26.641746 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 9 00:35:26.641766 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 9 00:35:26.641781 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 9 00:35:26.641800 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 9 00:35:26.641815 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 9 00:35:28.487266 systemd[1]: Mounted media.mount.
Sep 9 00:35:28.085991 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:35:28.086268 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:35:28.086492 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:35:28.086690 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:35:28.086745 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 9 00:35:28.086805 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-09-09T00:35:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 9 00:35:28.488294 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 9 00:35:28.489242 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 9 00:35:28.490226 systemd[1]: Mounted tmp.mount.
Sep 9 00:35:28.491222 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:35:28.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.492399 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:35:28.492533 systemd[1]: Finished modprobe@configfs.service.
Sep 9 00:35:28.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.493766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:35:28.494502 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:35:28.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:35:28.496082 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 9 00:35:28.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.497274 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:28.497448 systemd[1]: Finished modprobe@drm.service. Sep 9 00:35:28.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.498695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:28.498818 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:35:28.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.499947 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:35:28.500073 systemd[1]: Finished modprobe@fuse.service. Sep 9 00:35:28.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:28.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.501258 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:28.501433 systemd[1]: Finished modprobe@loop.service. Sep 9 00:35:28.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.502815 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:35:28.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.504015 systemd[1]: Finished systemd-network-generator.service. Sep 9 00:35:28.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.505388 systemd[1]: Finished systemd-remount-fs.service. Sep 9 00:35:28.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.506812 systemd[1]: Reached target network-pre.target. Sep 9 00:35:28.508999 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Sep 9 00:35:28.511065 systemd[1]: Mounting sys-kernel-config.mount... Sep 9 00:35:28.511840 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:35:28.514774 systemd[1]: Starting systemd-hwdb-update.service... Sep 9 00:35:28.516947 systemd[1]: Starting systemd-journal-flush.service... Sep 9 00:35:28.517933 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:28.519171 systemd[1]: Starting systemd-random-seed.service... Sep 9 00:35:28.520188 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:35:28.521407 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:35:28.523513 systemd-journald[998]: Time spent on flushing to /var/log/journal/6d18eb510d74416bbc88d81af6d5fa94 is 13.670ms for 1007 entries. Sep 9 00:35:28.523513 systemd-journald[998]: System Journal (/var/log/journal/6d18eb510d74416bbc88d81af6d5fa94) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:35:28.553299 systemd-journald[998]: Received client request to flush runtime journal. Sep 9 00:35:28.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.523998 systemd[1]: Starting systemd-sysusers.service... 
Sep 9 00:35:28.527583 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 9 00:35:28.528891 systemd[1]: Mounted sys-kernel-config.mount. Sep 9 00:35:28.532866 systemd[1]: Finished systemd-random-seed.service. Sep 9 00:35:28.533934 systemd[1]: Reached target first-boot-complete.target. Sep 9 00:35:28.543054 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:35:28.545342 systemd[1]: Starting systemd-udev-settle.service... Sep 9 00:35:28.551176 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:35:28.554834 systemd[1]: Finished systemd-journal-flush.service. Sep 9 00:35:28.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.556327 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:35:28.559691 systemd[1]: Finished systemd-sysusers.service. Sep 9 00:35:28.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.561949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 9 00:35:28.582718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 9 00:35:28.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.933605 systemd[1]: Finished systemd-hwdb-update.service. Sep 9 00:35:28.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 9 00:35:28.935000 audit: BPF prog-id=24 op=LOAD Sep 9 00:35:28.935000 audit: BPF prog-id=25 op=LOAD Sep 9 00:35:28.935000 audit: BPF prog-id=7 op=UNLOAD Sep 9 00:35:28.935000 audit: BPF prog-id=8 op=UNLOAD Sep 9 00:35:28.937491 systemd[1]: Starting systemd-udevd.service... Sep 9 00:35:28.955957 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Sep 9 00:35:28.971166 systemd[1]: Started systemd-udevd.service. Sep 9 00:35:28.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:28.972000 audit: BPF prog-id=26 op=LOAD Sep 9 00:35:28.975319 systemd[1]: Starting systemd-networkd.service... Sep 9 00:35:28.980000 audit: BPF prog-id=27 op=LOAD Sep 9 00:35:28.980000 audit: BPF prog-id=28 op=LOAD Sep 9 00:35:28.980000 audit: BPF prog-id=29 op=LOAD Sep 9 00:35:28.981898 systemd[1]: Starting systemd-userdbd.service... Sep 9 00:35:28.990579 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 9 00:35:29.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.013768 systemd[1]: Started systemd-userdbd.service. Sep 9 00:35:29.045340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:35:29.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:29.064127 systemd-networkd[1046]: lo: Link UP Sep 9 00:35:29.064138 systemd-networkd[1046]: lo: Gained carrier Sep 9 00:35:29.064505 systemd-networkd[1046]: Enumeration completed Sep 9 00:35:29.064610 systemd[1]: Started systemd-networkd.service. Sep 9 00:35:29.066147 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:35:29.067367 systemd-networkd[1046]: eth0: Link UP Sep 9 00:35:29.067380 systemd-networkd[1046]: eth0: Gained carrier Sep 9 00:35:29.082780 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:35:29.083065 systemd[1]: Finished systemd-udev-settle.service. Sep 9 00:35:29.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.085298 systemd[1]: Starting lvm2-activation-early.service... Sep 9 00:35:29.096609 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:35:29.122607 systemd[1]: Finished lvm2-activation-early.service. Sep 9 00:35:29.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.123703 systemd[1]: Reached target cryptsetup.target. Sep 9 00:35:29.125773 systemd[1]: Starting lvm2-activation.service... Sep 9 00:35:29.129561 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:35:29.156677 systemd[1]: Finished lvm2-activation.service. Sep 9 00:35:29.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:29.158541 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:35:29.159351 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:35:29.159383 systemd[1]: Reached target local-fs.target. Sep 9 00:35:29.160437 systemd[1]: Reached target machines.target. Sep 9 00:35:29.164544 systemd[1]: Starting ldconfig.service... Sep 9 00:35:29.165693 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.165759 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.167262 systemd[1]: Starting systemd-boot-update.service... Sep 9 00:35:29.169601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 9 00:35:29.172253 systemd[1]: Starting systemd-machine-id-commit.service... Sep 9 00:35:29.175110 systemd[1]: Starting systemd-sysext.service... Sep 9 00:35:29.176251 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Sep 9 00:35:29.178061 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 9 00:35:29.190023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 9 00:35:29.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.193596 systemd[1]: Unmounting usr-share-oem.mount... Sep 9 00:35:29.204090 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 9 00:35:29.204282 systemd[1]: Unmounted usr-share-oem.mount. Sep 9 00:35:29.256329 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 9 00:35:29.258947 systemd[1]: Finished systemd-machine-id-commit.service. Sep 9 00:35:29.260682 kernel: loop0: detected capacity change from 0 to 203944 Sep 9 00:35:29.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.277843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:35:29.280600 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31) Sep 9 00:35:29.280600 systemd-fsck[1082]: /dev/vda1: 236 files, 117310/258078 clusters Sep 9 00:35:29.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.282513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 9 00:35:29.285726 systemd[1]: Mounting boot.mount... Sep 9 00:35:29.295334 systemd[1]: Mounted boot.mount. Sep 9 00:35:29.301730 kernel: loop1: detected capacity change from 0 to 203944 Sep 9 00:35:29.306009 systemd[1]: Finished systemd-boot-update.service. Sep 9 00:35:29.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.312978 (sd-sysext)[1088]: Using extensions 'kubernetes'. Sep 9 00:35:29.313468 (sd-sysext)[1088]: Merged extensions into '/usr'. Sep 9 00:35:29.332192 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.333770 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:35:29.336320 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:35:29.339614 systemd[1]: Starting modprobe@loop.service... 
Sep 9 00:35:29.340579 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.340837 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.341840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:29.342084 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:35:29.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.343692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:29.343957 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:35:29.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.345853 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:29.346053 systemd[1]: Finished modprobe@loop.service. Sep 9 00:35:29.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:29.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.347664 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:29.347776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.398410 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:35:29.402693 systemd[1]: Finished ldconfig.service. Sep 9 00:35:29.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.482980 systemd[1]: Mounting usr-share-oem.mount... Sep 9 00:35:29.489407 systemd[1]: Mounted usr-share-oem.mount. Sep 9 00:35:29.491246 systemd[1]: Finished systemd-sysext.service. Sep 9 00:35:29.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.493418 systemd[1]: Starting ensure-sysext.service... Sep 9 00:35:29.495280 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 9 00:35:29.500233 systemd[1]: Reloading. Sep 9 00:35:29.505983 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 9 00:35:29.507783 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:35:29.510008 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 9 00:35:29.544044 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-09T00:35:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:35:29.544075 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-09-09T00:35:29Z" level=info msg="torcx already run" Sep 9 00:35:29.614218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:35:29.614239 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:35:29.633096 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 9 00:35:29.678000 audit: BPF prog-id=30 op=LOAD Sep 9 00:35:29.678000 audit: BPF prog-id=26 op=UNLOAD Sep 9 00:35:29.678000 audit: BPF prog-id=31 op=LOAD Sep 9 00:35:29.678000 audit: BPF prog-id=21 op=UNLOAD Sep 9 00:35:29.679000 audit: BPF prog-id=32 op=LOAD Sep 9 00:35:29.679000 audit: BPF prog-id=33 op=LOAD Sep 9 00:35:29.679000 audit: BPF prog-id=22 op=UNLOAD Sep 9 00:35:29.679000 audit: BPF prog-id=23 op=UNLOAD Sep 9 00:35:29.682000 audit: BPF prog-id=34 op=LOAD Sep 9 00:35:29.682000 audit: BPF prog-id=27 op=UNLOAD Sep 9 00:35:29.682000 audit: BPF prog-id=35 op=LOAD Sep 9 00:35:29.682000 audit: BPF prog-id=36 op=LOAD Sep 9 00:35:29.682000 audit: BPF prog-id=28 op=UNLOAD Sep 9 00:35:29.682000 audit: BPF prog-id=29 op=UNLOAD Sep 9 00:35:29.683000 audit: BPF prog-id=37 op=LOAD Sep 9 00:35:29.683000 audit: BPF prog-id=38 op=LOAD Sep 9 00:35:29.683000 audit: BPF prog-id=24 op=UNLOAD Sep 9 00:35:29.683000 audit: BPF prog-id=25 op=UNLOAD Sep 9 00:35:29.685897 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 9 00:35:29.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.691068 systemd[1]: Starting audit-rules.service... Sep 9 00:35:29.693165 systemd[1]: Starting clean-ca-certificates.service... Sep 9 00:35:29.695522 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 9 00:35:29.696000 audit: BPF prog-id=39 op=LOAD Sep 9 00:35:29.698436 systemd[1]: Starting systemd-resolved.service... Sep 9 00:35:29.699000 audit: BPF prog-id=40 op=LOAD Sep 9 00:35:29.700968 systemd[1]: Starting systemd-timesyncd.service... Sep 9 00:35:29.703050 systemd[1]: Starting systemd-update-utmp.service... Sep 9 00:35:29.704369 systemd[1]: Finished clean-ca-certificates.service. 
Sep 9 00:35:29.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.707182 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:29.712000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.716244 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.717836 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:35:29.719995 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:35:29.722202 systemd[1]: Starting modprobe@loop.service... Sep 9 00:35:29.722986 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.723169 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.723327 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:29.724530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:29.724741 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:35:29.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:29.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.725910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:29.726035 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:35:29.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.727219 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:29.727340 systemd[1]: Finished modprobe@loop.service. Sep 9 00:35:29.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.728597 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 9 00:35:29.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.730269 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 9 00:35:29.730408 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.732164 systemd[1]: Starting systemd-update-done.service... Sep 9 00:35:29.733705 systemd[1]: Finished systemd-update-utmp.service. Sep 9 00:35:29.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.736777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.738331 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:35:29.740472 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:35:29.742707 systemd[1]: Starting modprobe@loop.service... Sep 9 00:35:29.743399 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.743530 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.743621 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:29.744431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:29.744582 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:35:29.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:35:29.746040 systemd[1]: Finished systemd-update-done.service. Sep 9 00:35:29.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.747197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:29.747324 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:35:29.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.748520 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:29.748798 systemd[1]: Finished modprobe@loop.service. Sep 9 00:35:29.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:35:29.752164 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.753842 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:35:29.755836 systemd[1]: Starting modprobe@drm.service... Sep 9 00:35:29.757714 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:35:29.759758 systemd[1]: Starting modprobe@loop.service... 
Sep 9 00:35:29.759000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 9 00:35:29.759000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8c05a10 a2=420 a3=0 items=0 ppid=1154 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:35:29.759000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 9 00:35:29.760426 augenrules[1181]: No rules Sep 9 00:35:29.760448 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.760596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.762143 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 9 00:35:29.763008 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:35:29.764168 systemd[1]: Finished audit-rules.service. Sep 9 00:35:29.765377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:35:29.765513 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:35:29.766676 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:35:29.766825 systemd[1]: Finished modprobe@drm.service. Sep 9 00:35:29.767822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:35:29.767939 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:35:29.769128 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:35:29.769251 systemd[1]: Finished modprobe@loop.service. Sep 9 00:35:29.770404 systemd[1]: Started systemd-timesyncd.service. 
Sep 9 00:35:29.771521 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:35:29.771617 systemd-timesyncd[1159]: Initial clock synchronization to Tue 2025-09-09 00:35:30.006151 UTC. Sep 9 00:35:29.772948 systemd[1]: Finished ensure-sysext.service. Sep 9 00:35:29.773925 systemd[1]: Reached target time-set.target. Sep 9 00:35:29.774617 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:35:29.774691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.775785 systemd-resolved[1158]: Positive Trust Anchors: Sep 9 00:35:29.776040 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:35:29.776113 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 9 00:35:29.785188 systemd-resolved[1158]: Defaulting to hostname 'linux'. Sep 9 00:35:29.786870 systemd[1]: Started systemd-resolved.service. Sep 9 00:35:29.787564 systemd[1]: Reached target network.target. Sep 9 00:35:29.788225 systemd[1]: Reached target nss-lookup.target. Sep 9 00:35:29.788908 systemd[1]: Reached target sysinit.target. Sep 9 00:35:29.789537 systemd[1]: Started motdgen.path. Sep 9 00:35:29.790151 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 9 00:35:29.791163 systemd[1]: Started logrotate.timer. Sep 9 00:35:29.791834 systemd[1]: Started mdadm.timer. Sep 9 00:35:29.792335 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Sep 9 00:35:29.793059 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:35:29.793091 systemd[1]: Reached target paths.target. Sep 9 00:35:29.793622 systemd[1]: Reached target timers.target. Sep 9 00:35:29.794527 systemd[1]: Listening on dbus.socket. Sep 9 00:35:29.796302 systemd[1]: Starting docker.socket... Sep 9 00:35:29.799718 systemd[1]: Listening on sshd.socket. Sep 9 00:35:29.800420 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.800959 systemd[1]: Listening on docker.socket. Sep 9 00:35:29.801628 systemd[1]: Reached target sockets.target. Sep 9 00:35:29.802269 systemd[1]: Reached target basic.target. Sep 9 00:35:29.802935 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.802963 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:35:29.804139 systemd[1]: Starting containerd.service... Sep 9 00:35:29.805932 systemd[1]: Starting dbus.service... Sep 9 00:35:29.807594 systemd[1]: Starting enable-oem-cloudinit.service... Sep 9 00:35:29.809534 systemd[1]: Starting extend-filesystems.service... Sep 9 00:35:29.810490 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 9 00:35:29.811747 systemd[1]: Starting motdgen.service... Sep 9 00:35:29.812213 jq[1196]: false Sep 9 00:35:29.813969 systemd[1]: Starting prepare-helm.service... Sep 9 00:35:29.815701 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 9 00:35:29.817445 systemd[1]: Starting sshd-keygen.service... Sep 9 00:35:29.820371 systemd[1]: Starting systemd-logind.service... 
Sep 9 00:35:29.821036 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:35:29.821126 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:35:29.821606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:35:29.822663 systemd[1]: Starting update-engine.service... Sep 9 00:35:29.824475 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 9 00:35:29.826984 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:35:29.827165 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 9 00:35:29.827830 jq[1210]: true Sep 9 00:35:29.828285 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:35:29.828453 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 9 00:35:29.838454 jq[1214]: true Sep 9 00:35:29.844180 tar[1212]: linux-arm64/helm Sep 9 00:35:29.845938 extend-filesystems[1197]: Found loop1 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda1 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda2 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda3 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found usr Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda4 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda6 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda7 Sep 9 00:35:29.846758 extend-filesystems[1197]: Found vda9 Sep 9 00:35:29.846758 extend-filesystems[1197]: Checking size of /dev/vda9 Sep 9 00:35:29.846487 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:35:29.846679 systemd[1]: Finished motdgen.service. 
Sep 9 00:35:29.861369 dbus-daemon[1195]: [system] SELinux support is enabled Sep 9 00:35:29.861565 systemd[1]: Started dbus.service. Sep 9 00:35:29.864087 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:35:29.864122 systemd[1]: Reached target system-config.target. Sep 9 00:35:29.864868 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:35:29.864896 systemd[1]: Reached target user-config.target. Sep 9 00:35:29.871335 extend-filesystems[1197]: Resized partition /dev/vda9 Sep 9 00:35:29.875058 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Sep 9 00:35:29.883267 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:35:29.903956 update_engine[1207]: I0909 00:35:29.903610 1207 main.cc:92] Flatcar Update Engine starting Sep 9 00:35:29.910596 systemd-logind[1206]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:35:29.910875 systemd-logind[1206]: New seat seat0. Sep 9 00:35:29.913180 env[1216]: time="2025-09-09T00:35:29.912976400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 9 00:35:29.918039 systemd[1]: Started systemd-logind.service. Sep 9 00:35:29.918692 update_engine[1207]: I0909 00:35:29.918560 1207 update_check_scheduler.cc:74] Next update check in 9m24s Sep 9 00:35:29.919184 systemd[1]: Started update-engine.service. Sep 9 00:35:29.920667 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:35:29.922116 systemd[1]: Started locksmithd.service. 
Sep 9 00:35:29.935703 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:35:29.935703 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:35:29.935703 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:35:29.940257 extend-filesystems[1197]: Resized filesystem in /dev/vda9 Sep 9 00:35:29.937613 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:35:29.942246 bash[1246]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:35:29.937818 systemd[1]: Finished extend-filesystems.service. Sep 9 00:35:29.942247 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 9 00:35:29.954107 env[1216]: time="2025-09-09T00:35:29.954061040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:35:29.954265 env[1216]: time="2025-09-09T00:35:29.954241840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955509 env[1216]: time="2025-09-09T00:35:29.955469480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955543 env[1216]: time="2025-09-09T00:35:29.955508280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955784 env[1216]: time="2025-09-09T00:35:29.955760600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955822 env[1216]: time="2025-09-09T00:35:29.955784320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955822 env[1216]: time="2025-09-09T00:35:29.955799440Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 9 00:35:29.955822 env[1216]: time="2025-09-09T00:35:29.955809640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.955920 env[1216]: time="2025-09-09T00:35:29.955900600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.956209 env[1216]: time="2025-09-09T00:35:29.956187720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:35:29.956337 env[1216]: time="2025-09-09T00:35:29.956315400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:35:29.956373 env[1216]: time="2025-09-09T00:35:29.956335600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 9 00:35:29.956410 env[1216]: time="2025-09-09T00:35:29.956391040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 9 00:35:29.956410 env[1216]: time="2025-09-09T00:35:29.956408360Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:35:29.959678 env[1216]: time="2025-09-09T00:35:29.959627920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:35:29.959741 env[1216]: time="2025-09-09T00:35:29.959683360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:35:29.959741 env[1216]: time="2025-09-09T00:35:29.959697840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:35:29.959741 env[1216]: time="2025-09-09T00:35:29.959729880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.959796 env[1216]: time="2025-09-09T00:35:29.959746280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.959796 env[1216]: time="2025-09-09T00:35:29.959763000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.959796 env[1216]: time="2025-09-09T00:35:29.959776000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.960160 env[1216]: time="2025-09-09T00:35:29.960135920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.960196 env[1216]: time="2025-09-09T00:35:29.960162000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Sep 9 00:35:29.960196 env[1216]: time="2025-09-09T00:35:29.960175200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.960196 env[1216]: time="2025-09-09T00:35:29.960187760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.960258 env[1216]: time="2025-09-09T00:35:29.960201600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:35:29.960341 env[1216]: time="2025-09-09T00:35:29.960321120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:35:29.960425 env[1216]: time="2025-09-09T00:35:29.960408000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:35:29.960818 env[1216]: time="2025-09-09T00:35:29.960796720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:35:29.960849 env[1216]: time="2025-09-09T00:35:29.960831560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.960849 env[1216]: time="2025-09-09T00:35:29.960845680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:35:29.960997 env[1216]: time="2025-09-09T00:35:29.960979480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961024 env[1216]: time="2025-09-09T00:35:29.960998520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961024 env[1216]: time="2025-09-09T00:35:29.961010920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 9 00:35:29.961090 env[1216]: time="2025-09-09T00:35:29.961023000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961117 env[1216]: time="2025-09-09T00:35:29.961093120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961117 env[1216]: time="2025-09-09T00:35:29.961106760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961154 env[1216]: time="2025-09-09T00:35:29.961119040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961154 env[1216]: time="2025-09-09T00:35:29.961137360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961197 env[1216]: time="2025-09-09T00:35:29.961151960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:35:29.961299 env[1216]: time="2025-09-09T00:35:29.961278320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961329 env[1216]: time="2025-09-09T00:35:29.961301960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961329 env[1216]: time="2025-09-09T00:35:29.961315160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961373 env[1216]: time="2025-09-09T00:35:29.961327200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:35:29.961373 env[1216]: time="2025-09-09T00:35:29.961342000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 9 00:35:29.961373 env[1216]: time="2025-09-09T00:35:29.961352720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:35:29.961430 env[1216]: time="2025-09-09T00:35:29.961372040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 9 00:35:29.961430 env[1216]: time="2025-09-09T00:35:29.961408560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:35:29.961674 env[1216]: time="2025-09-09T00:35:29.961597240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:35:29.962250 env[1216]: time="2025-09-09T00:35:29.961684720Z" level=info msg="Connect containerd service" Sep 9 00:35:29.962250 env[1216]: time="2025-09-09T00:35:29.961716760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:35:29.962453 env[1216]: time="2025-09-09T00:35:29.962422080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:35:29.962716 env[1216]: time="2025-09-09T00:35:29.962667360Z" level=info msg="Start subscribing containerd event" Sep 9 00:35:29.962786 env[1216]: time="2025-09-09T00:35:29.962736400Z" level=info msg="Start recovering state" Sep 9 00:35:29.962861 env[1216]: time="2025-09-09T00:35:29.962840080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:35:29.962904 env[1216]: time="2025-09-09T00:35:29.962849880Z" level=info msg="Start event monitor" Sep 9 00:35:29.962904 env[1216]: time="2025-09-09T00:35:29.962885880Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 00:35:29.962947 env[1216]: time="2025-09-09T00:35:29.962887240Z" level=info msg="Start snapshots syncer" Sep 9 00:35:29.962947 env[1216]: time="2025-09-09T00:35:29.962935800Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:35:29.962947 env[1216]: time="2025-09-09T00:35:29.962944280Z" level=info msg="Start streaming server" Sep 9 00:35:29.963015 systemd[1]: Started containerd.service. Sep 9 00:35:29.964188 env[1216]: time="2025-09-09T00:35:29.964162360Z" level=info msg="containerd successfully booted in 0.066969s" Sep 9 00:35:29.987415 locksmithd[1250]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:35:30.245403 tar[1212]: linux-arm64/LICENSE Sep 9 00:35:30.245506 tar[1212]: linux-arm64/README.md Sep 9 00:35:30.249798 systemd[1]: Finished prepare-helm.service. Sep 9 00:35:30.715624 sshd_keygen[1219]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:35:30.735620 systemd[1]: Finished sshd-keygen.service. Sep 9 00:35:30.738909 systemd[1]: Starting issuegen.service... Sep 9 00:35:30.743981 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:35:30.744173 systemd[1]: Finished issuegen.service. Sep 9 00:35:30.746620 systemd[1]: Starting systemd-user-sessions.service... Sep 9 00:35:30.754722 systemd[1]: Finished systemd-user-sessions.service. Sep 9 00:35:30.757491 systemd[1]: Started getty@tty1.service. Sep 9 00:35:30.761845 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 9 00:35:30.764180 systemd[1]: Reached target getty.target. Sep 9 00:35:31.063888 systemd-networkd[1046]: eth0: Gained IPv6LL Sep 9 00:35:31.066049 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 9 00:35:31.067380 systemd[1]: Reached target network-online.target. Sep 9 00:35:31.070127 systemd[1]: Starting kubelet.service... Sep 9 00:35:31.811553 systemd[1]: Started kubelet.service. Sep 9 00:35:31.814737 systemd[1]: Reached target multi-user.target. 
Sep 9 00:35:31.821803 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 9 00:35:31.830540 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 9 00:35:31.830922 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 9 00:35:31.832522 systemd[1]: Startup finished in 568ms (kernel) + 4.893s (initrd) + 5.387s (userspace) = 10.848s. Sep 9 00:35:32.259882 kubelet[1277]: E0909 00:35:32.259837 1277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:35:32.262682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:35:32.262811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:35:34.764310 systemd[1]: Created slice system-sshd.slice. Sep 9 00:35:34.765448 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:56582.service. Sep 9 00:35:34.809996 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 56582 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:35:34.812524 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:34.821233 systemd[1]: Created slice user-500.slice. Sep 9 00:35:34.822409 systemd[1]: Starting user-runtime-dir@500.service... Sep 9 00:35:34.824680 systemd-logind[1206]: New session 1 of user core. Sep 9 00:35:34.831027 systemd[1]: Finished user-runtime-dir@500.service. Sep 9 00:35:34.832420 systemd[1]: Starting user@500.service... Sep 9 00:35:34.836339 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:34.901995 systemd[1289]: Queued start job for default target default.target. Sep 9 00:35:34.903116 systemd[1289]: Reached target paths.target. 
Sep 9 00:35:34.903156 systemd[1289]: Reached target sockets.target. Sep 9 00:35:34.903167 systemd[1289]: Reached target timers.target. Sep 9 00:35:34.903177 systemd[1289]: Reached target basic.target. Sep 9 00:35:34.903218 systemd[1289]: Reached target default.target. Sep 9 00:35:34.903242 systemd[1289]: Startup finished in 60ms. Sep 9 00:35:34.903364 systemd[1]: Started user@500.service. Sep 9 00:35:34.904354 systemd[1]: Started session-1.scope. Sep 9 00:35:34.956411 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:56596.service. Sep 9 00:35:34.992356 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 56596 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:35:34.993601 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:34.997532 systemd-logind[1206]: New session 2 of user core. Sep 9 00:35:34.998745 systemd[1]: Started session-2.scope. Sep 9 00:35:35.054189 sshd[1298]: pam_unix(sshd:session): session closed for user core Sep 9 00:35:35.056853 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:56596.service: Deactivated successfully. Sep 9 00:35:35.057442 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:35:35.057970 systemd-logind[1206]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:35:35.059051 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:56610.service. Sep 9 00:35:35.059682 systemd-logind[1206]: Removed session 2. Sep 9 00:35:35.092468 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 56610 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:35:35.093690 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:35.097130 systemd-logind[1206]: New session 3 of user core. Sep 9 00:35:35.097988 systemd[1]: Started session-3.scope. 
Sep 9 00:35:35.148367 sshd[1304]: pam_unix(sshd:session): session closed for user core Sep 9 00:35:35.152392 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:56610.service: Deactivated successfully. Sep 9 00:35:35.152957 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:35:35.153410 systemd-logind[1206]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:35:35.154488 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:56614.service. Sep 9 00:35:35.155147 systemd-logind[1206]: Removed session 3. Sep 9 00:35:35.186331 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 56614 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:35:35.187739 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:35.190929 systemd-logind[1206]: New session 4 of user core. Sep 9 00:35:35.191753 systemd[1]: Started session-4.scope. Sep 9 00:35:35.249941 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 9 00:35:35.253553 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:56614.service: Deactivated successfully. Sep 9 00:35:35.254165 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:35:35.254625 systemd-logind[1206]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:35:35.255737 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:56628.service. Sep 9 00:35:35.256475 systemd-logind[1206]: Removed session 4. Sep 9 00:35:35.292070 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:35:35.293727 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:35:35.298192 systemd[1]: Started session-5.scope. Sep 9 00:35:35.299972 systemd-logind[1206]: New session 5 of user core. 
Sep 9 00:35:35.363353 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:35:35.363586 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:35:35.406052 systemd[1]: Starting docker.service... Sep 9 00:35:35.462768 env[1331]: time="2025-09-09T00:35:35.462712368Z" level=info msg="Starting up" Sep 9 00:35:35.464269 env[1331]: time="2025-09-09T00:35:35.464246477Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:35:35.464269 env[1331]: time="2025-09-09T00:35:35.464268239Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:35:35.464334 env[1331]: time="2025-09-09T00:35:35.464292641Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:35:35.464334 env[1331]: time="2025-09-09T00:35:35.464303401Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:35:35.466531 env[1331]: time="2025-09-09T00:35:35.466507362Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:35:35.466531 env[1331]: time="2025-09-09T00:35:35.466527947Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:35:35.466612 env[1331]: time="2025-09-09T00:35:35.466543416Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:35:35.466612 env[1331]: time="2025-09-09T00:35:35.466552836Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:35:35.471027 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1265380209-merged.mount: Deactivated successfully. Sep 9 00:35:35.646601 env[1331]: time="2025-09-09T00:35:35.646501723Z" level=info msg="Loading containers: start." 
Sep 9 00:35:35.779767 kernel: Initializing XFRM netlink socket Sep 9 00:35:35.808390 env[1331]: time="2025-09-09T00:35:35.808328694Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 9 00:35:35.871753 systemd-networkd[1046]: docker0: Link UP Sep 9 00:35:35.893034 env[1331]: time="2025-09-09T00:35:35.892982482Z" level=info msg="Loading containers: done." Sep 9 00:35:35.912601 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck210932445-merged.mount: Deactivated successfully. Sep 9 00:35:35.921075 env[1331]: time="2025-09-09T00:35:35.921035052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:35:35.921447 env[1331]: time="2025-09-09T00:35:35.921427065Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 9 00:35:35.921650 env[1331]: time="2025-09-09T00:35:35.921617570Z" level=info msg="Daemon has completed initialization" Sep 9 00:35:35.938681 systemd[1]: Started docker.service. Sep 9 00:35:35.949303 env[1331]: time="2025-09-09T00:35:35.949188763Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:35:36.575972 env[1216]: time="2025-09-09T00:35:36.575932551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 00:35:37.190455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300680239.mount: Deactivated successfully. 
Sep 9 00:35:38.499304 env[1216]: time="2025-09-09T00:35:38.499223167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:38.502062 env[1216]: time="2025-09-09T00:35:38.502013866Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:38.504196 env[1216]: time="2025-09-09T00:35:38.504159929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:38.506012 env[1216]: time="2025-09-09T00:35:38.505974847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:38.507764 env[1216]: time="2025-09-09T00:35:38.507722574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 00:35:38.509449 env[1216]: time="2025-09-09T00:35:38.509420443Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 00:35:40.118146 env[1216]: time="2025-09-09T00:35:40.118097697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:40.120803 env[1216]: time="2025-09-09T00:35:40.120761746Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 9 00:35:40.125117 env[1216]: time="2025-09-09T00:35:40.125057576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:40.128092 env[1216]: time="2025-09-09T00:35:40.128053286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:40.128660 env[1216]: time="2025-09-09T00:35:40.128606806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 00:35:40.131846 env[1216]: time="2025-09-09T00:35:40.131810509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 00:35:41.413425 env[1216]: time="2025-09-09T00:35:41.413346117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:41.416577 env[1216]: time="2025-09-09T00:35:41.416367389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:41.418726 env[1216]: time="2025-09-09T00:35:41.418697583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:41.421195 env[1216]: time="2025-09-09T00:35:41.421158294Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:41.422647 env[1216]: time="2025-09-09T00:35:41.422397690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 00:35:41.423072 env[1216]: time="2025-09-09T00:35:41.423047490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 00:35:42.319968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:35:42.320144 systemd[1]: Stopped kubelet.service. Sep 9 00:35:42.321548 systemd[1]: Starting kubelet.service... Sep 9 00:35:42.440977 systemd[1]: Started kubelet.service. Sep 9 00:35:42.480338 kubelet[1466]: E0909 00:35:42.480284 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:35:42.482621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:35:42.482769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:35:42.544990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079998871.mount: Deactivated successfully. 
Sep 9 00:35:43.250732 env[1216]: time="2025-09-09T00:35:43.250682149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:43.252167 env[1216]: time="2025-09-09T00:35:43.252122801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:43.254180 env[1216]: time="2025-09-09T00:35:43.254009147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:43.255753 env[1216]: time="2025-09-09T00:35:43.255669851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:43.256127 env[1216]: time="2025-09-09T00:35:43.256097693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 00:35:43.256773 env[1216]: time="2025-09-09T00:35:43.256746672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:35:44.053883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519663174.mount: Deactivated successfully. 
Sep 9 00:35:45.209000 env[1216]: time="2025-09-09T00:35:45.208224900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.215063 env[1216]: time="2025-09-09T00:35:45.214443386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.216702 env[1216]: time="2025-09-09T00:35:45.216662418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.218867 env[1216]: time="2025-09-09T00:35:45.218826433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.220006 env[1216]: time="2025-09-09T00:35:45.219866576Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:35:45.221727 env[1216]: time="2025-09-09T00:35:45.221697597Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:35:45.690046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679967270.mount: Deactivated successfully. 
Sep 9 00:35:45.698048 env[1216]: time="2025-09-09T00:35:45.697995098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.700042 env[1216]: time="2025-09-09T00:35:45.699973541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.702435 env[1216]: time="2025-09-09T00:35:45.702399268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.704544 env[1216]: time="2025-09-09T00:35:45.704511479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:45.705239 env[1216]: time="2025-09-09T00:35:45.705204612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:35:45.706032 env[1216]: time="2025-09-09T00:35:45.705993283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:35:46.135476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185501780.mount: Deactivated successfully. 
Sep 9 00:35:48.188371 env[1216]: time="2025-09-09T00:35:48.188303575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:48.194465 env[1216]: time="2025-09-09T00:35:48.194386348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:48.197843 env[1216]: time="2025-09-09T00:35:48.197504678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:48.200597 env[1216]: time="2025-09-09T00:35:48.200556312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:48.201762 env[1216]: time="2025-09-09T00:35:48.201725124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 00:35:52.551925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:35:52.552119 systemd[1]: Stopped kubelet.service. Sep 9 00:35:52.553459 systemd[1]: Starting kubelet.service... Sep 9 00:35:52.562214 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:35:52.562282 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:35:52.562501 systemd[1]: Stopped kubelet.service. Sep 9 00:35:52.564524 systemd[1]: Starting kubelet.service... Sep 9 00:35:52.593600 systemd[1]: Reloading. 
Sep 9 00:35:52.646893 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-09-09T00:35:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:35:52.646929 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-09-09T00:35:52Z" level=info msg="torcx already run" Sep 9 00:35:52.743606 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:35:52.743625 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:35:52.759087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:35:52.822676 systemd[1]: Started kubelet.service. Sep 9 00:35:52.825810 systemd[1]: Stopping kubelet.service... Sep 9 00:35:52.826702 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:35:52.826994 systemd[1]: Stopped kubelet.service. Sep 9 00:35:52.828689 systemd[1]: Starting kubelet.service... Sep 9 00:35:52.921954 systemd[1]: Started kubelet.service. Sep 9 00:35:52.970888 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:35:52.970888 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:35:52.970888 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:35:52.970888 kubelet[1573]: I0909 00:35:52.970778 1573 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:35:53.923597 kubelet[1573]: I0909 00:35:53.923506 1573 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:35:53.923597 kubelet[1573]: I0909 00:35:53.923538 1573 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:35:53.923851 kubelet[1573]: I0909 00:35:53.923808 1573 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:35:53.956378 kubelet[1573]: E0909 00:35:53.956324 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:35:53.957362 kubelet[1573]: I0909 00:35:53.957341 1573 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:35:53.964755 kubelet[1573]: E0909 00:35:53.964727 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:35:53.964919 kubelet[1573]: I0909 00:35:53.964907 1573 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 9 00:35:53.970143 kubelet[1573]: I0909 00:35:53.970113 1573 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:35:53.971846 kubelet[1573]: I0909 00:35:53.971816 1573 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:35:53.972314 kubelet[1573]: I0909 00:35:53.972288 1573 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:35:53.972553 kubelet[1573]: I0909 00:35:53.972380 1573 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReserv
edMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:35:53.972767 kubelet[1573]: I0909 00:35:53.972755 1573 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:35:53.972835 kubelet[1573]: I0909 00:35:53.972826 1573 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:35:53.973107 kubelet[1573]: I0909 00:35:53.973094 1573 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:35:53.977238 kubelet[1573]: W0909 00:35:53.977192 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Sep 9 00:35:53.977351 kubelet[1573]: E0909 00:35:53.977332 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:35:53.977736 kubelet[1573]: I0909 00:35:53.977720 1573 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:35:53.977817 kubelet[1573]: I0909 00:35:53.977803 1573 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:35:53.977900 kubelet[1573]: I0909 00:35:53.977891 1573 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:35:53.978595 kubelet[1573]: I0909 00:35:53.978567 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:35:53.979210 kubelet[1573]: W0909 00:35:53.979154 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Sep 9 00:35:53.979271 kubelet[1573]: E0909 00:35:53.979214 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:35:53.982429 kubelet[1573]: I0909 00:35:53.982396 1573 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:35:53.983222 kubelet[1573]: I0909 00:35:53.983203 1573 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:35:53.983377 kubelet[1573]: W0909 00:35:53.983362 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:35:53.987401 kubelet[1573]: I0909 00:35:53.987364 1573 server.go:1274] "Started kubelet" Sep 9 00:35:53.988836 kubelet[1573]: I0909 00:35:53.988788 1573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:35:53.997320 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 9 00:35:53.997541 kubelet[1573]: I0909 00:35:53.997453 1573 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:35:53.997737 kubelet[1573]: I0909 00:35:53.997616 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:35:53.997959 kubelet[1573]: I0909 00:35:53.997933 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:35:53.998860 kubelet[1573]: I0909 00:35:53.998783 1573 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:35:53.998942 kubelet[1573]: I0909 00:35:53.998889 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:35:53.999109 kubelet[1573]: E0909 00:35:53.999076 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:53.999223 kubelet[1573]: I0909 00:35:53.999192 1573 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:35:53.999891 kubelet[1573]: I0909 00:35:53.999857 1573 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:35:53.999957 kubelet[1573]: I0909 00:35:53.999928 1573 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:35:54.000999 kubelet[1573]: W0909 00:35:54.000950 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Sep 9 00:35:54.001075 kubelet[1573]: E0909 00:35:54.001007 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: 
connection refused" logger="UnhandledError" Sep 9 00:35:54.001110 kubelet[1573]: E0909 00:35:54.001060 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Sep 9 00:35:54.001894 kubelet[1573]: I0909 00:35:54.001707 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:35:54.003806 kubelet[1573]: E0909 00:35:54.002372 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863761ebc726ca1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:35:53.987337377 +0000 UTC m=+1.053059682,LastTimestamp:2025-09-09 00:35:53.987337377 +0000 UTC m=+1.053059682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:35:54.004140 kubelet[1573]: I0909 00:35:54.004120 1573 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:35:54.004215 kubelet[1573]: I0909 00:35:54.004205 1573 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:35:54.005380 kubelet[1573]: E0909 00:35:54.005355 1573 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:35:54.015786 kubelet[1573]: I0909 00:35:54.015743 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:35:54.016764 kubelet[1573]: I0909 00:35:54.016719 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:35:54.016764 kubelet[1573]: I0909 00:35:54.016766 1573 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:35:54.016855 kubelet[1573]: I0909 00:35:54.016785 1573 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:35:54.016855 kubelet[1573]: E0909 00:35:54.016831 1573 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:35:54.021755 kubelet[1573]: I0909 00:35:54.021713 1573 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:35:54.021755 kubelet[1573]: I0909 00:35:54.021739 1573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:35:54.021755 kubelet[1573]: I0909 00:35:54.021760 1573 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:35:54.022533 kubelet[1573]: W0909 00:35:54.022474 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Sep 9 00:35:54.022628 kubelet[1573]: E0909 00:35:54.022532 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:35:54.058959 kubelet[1573]: I0909 00:35:54.058913 1573 policy_none.go:49] "None policy: 
Start" Sep 9 00:35:54.059761 kubelet[1573]: I0909 00:35:54.059738 1573 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:35:54.059802 kubelet[1573]: I0909 00:35:54.059768 1573 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:35:54.067281 systemd[1]: Created slice kubepods.slice. Sep 9 00:35:54.071893 systemd[1]: Created slice kubepods-burstable.slice. Sep 9 00:35:54.074957 systemd[1]: Created slice kubepods-besteffort.slice. Sep 9 00:35:54.089433 kubelet[1573]: I0909 00:35:54.089399 1573 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:35:54.089660 kubelet[1573]: I0909 00:35:54.089586 1573 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:35:54.089660 kubelet[1573]: I0909 00:35:54.089603 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:35:54.089939 kubelet[1573]: I0909 00:35:54.089873 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:35:54.090947 kubelet[1573]: E0909 00:35:54.090926 1573 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:35:54.124392 systemd[1]: Created slice kubepods-burstable-pod4487195526916549618860bc052eeda5.slice. Sep 9 00:35:54.143613 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 00:35:54.155466 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 9 00:35:54.191950 kubelet[1573]: I0909 00:35:54.191861 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:35:54.193379 kubelet[1573]: E0909 00:35:54.193345 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Sep 9 00:35:54.202017 kubelet[1573]: E0909 00:35:54.201976 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Sep 9 00:35:54.301680 kubelet[1573]: I0909 00:35:54.301609 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:35:54.301680 kubelet[1573]: I0909 00:35:54.301670 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:35:54.301779 kubelet[1573]: I0909 00:35:54.301689 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:35:54.301779 kubelet[1573]: I0909 00:35:54.301709 1573 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:35:54.301779 kubelet[1573]: I0909 00:35:54.301759 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:35:54.301849 kubelet[1573]: I0909 00:35:54.301806 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:35:54.301849 kubelet[1573]: I0909 00:35:54.301826 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:35:54.301849 kubelet[1573]: I0909 00:35:54.301844 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:35:54.301909 kubelet[1573]: I0909 00:35:54.301861 1573 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:35:54.395080 kubelet[1573]: I0909 00:35:54.395032 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:35:54.395404 kubelet[1573]: E0909 00:35:54.395366 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Sep 9 00:35:54.442141 kubelet[1573]: E0909 00:35:54.442054 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:54.442942 env[1216]: time="2025-09-09T00:35:54.442900748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4487195526916549618860bc052eeda5,Namespace:kube-system,Attempt:0,}" Sep 9 00:35:54.454566 kubelet[1573]: E0909 00:35:54.454539 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:54.455233 env[1216]: time="2025-09-09T00:35:54.454962891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:35:54.457638 kubelet[1573]: E0909 00:35:54.457612 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:54.458295 env[1216]: time="2025-09-09T00:35:54.457960415Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:35:54.603096 kubelet[1573]: E0909 00:35:54.603034 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Sep 9 00:35:54.798657 kubelet[1573]: I0909 00:35:54.798563 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:35:54.799640 kubelet[1573]: E0909 00:35:54.799593 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Sep 9 00:35:54.938356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786199361.mount: Deactivated successfully. Sep 9 00:35:54.943310 env[1216]: time="2025-09-09T00:35:54.943270051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.950216 env[1216]: time="2025-09-09T00:35:54.950180228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.951470 env[1216]: time="2025-09-09T00:35:54.951320344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.952360 env[1216]: time="2025-09-09T00:35:54.952325019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.954296 env[1216]: 
time="2025-09-09T00:35:54.954261762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.956937 env[1216]: time="2025-09-09T00:35:54.956909991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.959456 env[1216]: time="2025-09-09T00:35:54.959070720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.959765 env[1216]: time="2025-09-09T00:35:54.959735590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.961375 env[1216]: time="2025-09-09T00:35:54.961344504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.962891 env[1216]: time="2025-09-09T00:35:54.962863910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.967248 env[1216]: time="2025-09-09T00:35:54.967220330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.969035 env[1216]: time="2025-09-09T00:35:54.968970411Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:35:54.975220 env[1216]: time="2025-09-09T00:35:54.974995336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:35:54.975220 env[1216]: time="2025-09-09T00:35:54.975032900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:35:54.975220 env[1216]: time="2025-09-09T00:35:54.975043473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:35:54.975690 env[1216]: time="2025-09-09T00:35:54.975246474Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5defe45141b8853fc65e964059d0243ed326b17cc45ff6459d1c469cab62163 pid=1614 runtime=io.containerd.runc.v2 Sep 9 00:35:54.991468 systemd[1]: Started cri-containerd-c5defe45141b8853fc65e964059d0243ed326b17cc45ff6459d1c469cab62163.scope. Sep 9 00:35:54.993377 env[1216]: time="2025-09-09T00:35:54.992892697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:35:54.993377 env[1216]: time="2025-09-09T00:35:54.993212958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:35:54.993377 env[1216]: time="2025-09-09T00:35:54.993224612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:35:54.994755 env[1216]: time="2025-09-09T00:35:54.993628412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e35a575ae9e9e78d8f3bfbc9e7076b9e068e3c27b08399eaa203fe63de44d2e3 pid=1644 runtime=io.containerd.runc.v2 Sep 9 00:35:54.998093 env[1216]: time="2025-09-09T00:35:54.998020475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:35:54.998172 env[1216]: time="2025-09-09T00:35:54.998100249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:35:54.998172 env[1216]: time="2025-09-09T00:35:54.998136813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:35:54.998354 env[1216]: time="2025-09-09T00:35:54.998298165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31b78c7fb7991ba29229b5045f90ad043ce2078795ca8f9d842241037e6b5a1e pid=1654 runtime=io.containerd.runc.v2 Sep 9 00:35:55.001872 kubelet[1573]: W0909 00:35:55.001778 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Sep 9 00:35:55.001872 kubelet[1573]: E0909 00:35:55.001845 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:35:55.013271 systemd[1]: Started 
cri-containerd-e35a575ae9e9e78d8f3bfbc9e7076b9e068e3c27b08399eaa203fe63de44d2e3.scope. Sep 9 00:35:55.028515 systemd[1]: Started cri-containerd-31b78c7fb7991ba29229b5045f90ad043ce2078795ca8f9d842241037e6b5a1e.scope. Sep 9 00:35:55.038284 env[1216]: time="2025-09-09T00:35:55.038236461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4487195526916549618860bc052eeda5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5defe45141b8853fc65e964059d0243ed326b17cc45ff6459d1c469cab62163\"" Sep 9 00:35:55.039384 kubelet[1573]: E0909 00:35:55.039351 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:55.041223 env[1216]: time="2025-09-09T00:35:55.041190335Z" level=info msg="CreateContainer within sandbox \"c5defe45141b8853fc65e964059d0243ed326b17cc45ff6459d1c469cab62163\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:35:55.056527 env[1216]: time="2025-09-09T00:35:55.056436520Z" level=info msg="CreateContainer within sandbox \"c5defe45141b8853fc65e964059d0243ed326b17cc45ff6459d1c469cab62163\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ffef3b498da9017e4d84ac4c967e5fba17dc9abe6495c974d8681adbc1ad16b\"" Sep 9 00:35:55.058584 env[1216]: time="2025-09-09T00:35:55.058138852Z" level=info msg="StartContainer for \"8ffef3b498da9017e4d84ac4c967e5fba17dc9abe6495c974d8681adbc1ad16b\"" Sep 9 00:35:55.068034 env[1216]: time="2025-09-09T00:35:55.068001715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b78c7fb7991ba29229b5045f90ad043ce2078795ca8f9d842241037e6b5a1e\"" Sep 9 00:35:55.069059 kubelet[1573]: E0909 00:35:55.069029 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:55.069586 env[1216]: time="2025-09-09T00:35:55.069559256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e35a575ae9e9e78d8f3bfbc9e7076b9e068e3c27b08399eaa203fe63de44d2e3\"" Sep 9 00:35:55.070569 kubelet[1573]: E0909 00:35:55.070551 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:55.071359 env[1216]: time="2025-09-09T00:35:55.071325293Z" level=info msg="CreateContainer within sandbox \"31b78c7fb7991ba29229b5045f90ad043ce2078795ca8f9d842241037e6b5a1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:35:55.072138 env[1216]: time="2025-09-09T00:35:55.072110150Z" level=info msg="CreateContainer within sandbox \"e35a575ae9e9e78d8f3bfbc9e7076b9e068e3c27b08399eaa203fe63de44d2e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:35:55.082669 systemd[1]: Started cri-containerd-8ffef3b498da9017e4d84ac4c967e5fba17dc9abe6495c974d8681adbc1ad16b.scope. 
Sep 9 00:35:55.093854 env[1216]: time="2025-09-09T00:35:55.093811573Z" level=info msg="CreateContainer within sandbox \"e35a575ae9e9e78d8f3bfbc9e7076b9e068e3c27b08399eaa203fe63de44d2e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"494d71d348c92645e4b6ea258dac2c70015f24f9b24af7e1e11059110d98feb8\"" Sep 9 00:35:55.095198 env[1216]: time="2025-09-09T00:35:55.095166703Z" level=info msg="CreateContainer within sandbox \"31b78c7fb7991ba29229b5045f90ad043ce2078795ca8f9d842241037e6b5a1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"49166eef8e5d97316209cefad74e72d2ce4d44363cb4d51ce262b85676940704\"" Sep 9 00:35:55.095274 env[1216]: time="2025-09-09T00:35:55.095191409Z" level=info msg="StartContainer for \"494d71d348c92645e4b6ea258dac2c70015f24f9b24af7e1e11059110d98feb8\"" Sep 9 00:35:55.095568 env[1216]: time="2025-09-09T00:35:55.095541333Z" level=info msg="StartContainer for \"49166eef8e5d97316209cefad74e72d2ce4d44363cb4d51ce262b85676940704\"" Sep 9 00:35:55.114706 systemd[1]: Started cri-containerd-494d71d348c92645e4b6ea258dac2c70015f24f9b24af7e1e11059110d98feb8.scope. Sep 9 00:35:55.123603 systemd[1]: Started cri-containerd-49166eef8e5d97316209cefad74e72d2ce4d44363cb4d51ce262b85676940704.scope. 
Sep 9 00:35:55.124571 env[1216]: time="2025-09-09T00:35:55.124247805Z" level=info msg="StartContainer for \"8ffef3b498da9017e4d84ac4c967e5fba17dc9abe6495c974d8681adbc1ad16b\" returns successfully" Sep 9 00:35:55.168119 env[1216]: time="2025-09-09T00:35:55.168072409Z" level=info msg="StartContainer for \"494d71d348c92645e4b6ea258dac2c70015f24f9b24af7e1e11059110d98feb8\" returns successfully" Sep 9 00:35:55.174505 env[1216]: time="2025-09-09T00:35:55.174460617Z" level=info msg="StartContainer for \"49166eef8e5d97316209cefad74e72d2ce4d44363cb4d51ce262b85676940704\" returns successfully" Sep 9 00:35:55.601679 kubelet[1573]: I0909 00:35:55.601631 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:35:56.027718 kubelet[1573]: E0909 00:35:56.027617 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:56.029990 kubelet[1573]: E0909 00:35:56.029969 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:56.032198 kubelet[1573]: E0909 00:35:56.032177 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:56.744879 kubelet[1573]: I0909 00:35:56.744842 1573 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:35:56.744879 kubelet[1573]: E0909 00:35:56.744879 1573 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:35:56.768735 kubelet[1573]: E0909 00:35:56.768704 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:56.868857 kubelet[1573]: E0909 
00:35:56.868798 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:56.969708 kubelet[1573]: E0909 00:35:56.969667 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:57.033954 kubelet[1573]: E0909 00:35:57.033869 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:57.069897 kubelet[1573]: E0909 00:35:57.069864 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:57.170782 kubelet[1573]: E0909 00:35:57.170733 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:57.271374 kubelet[1573]: E0909 00:35:57.271333 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:57.688360 kubelet[1573]: E0909 00:35:57.688280 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:57.981218 kubelet[1573]: I0909 00:35:57.981106 1573 apiserver.go:52] "Watching apiserver" Sep 9 00:35:58.000110 kubelet[1573]: I0909 00:35:58.000055 1573 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:35:58.034415 kubelet[1573]: E0909 00:35:58.034381 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:35:58.876523 systemd[1]: Reloading. 
Sep 9 00:35:58.935495 /usr/lib/systemd/system-generators/torcx-generator[1874]: time="2025-09-09T00:35:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:35:58.935526 /usr/lib/systemd/system-generators/torcx-generator[1874]: time="2025-09-09T00:35:58Z" level=info msg="torcx already run" Sep 9 00:35:59.004288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:35:59.004332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:35:59.021470 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:35:59.103745 systemd[1]: Stopping kubelet.service... Sep 9 00:35:59.129026 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:35:59.129223 systemd[1]: Stopped kubelet.service. Sep 9 00:35:59.129272 systemd[1]: kubelet.service: Consumed 1.392s CPU time. Sep 9 00:35:59.130931 systemd[1]: Starting kubelet.service... Sep 9 00:35:59.222118 systemd[1]: Started kubelet.service. Sep 9 00:35:59.260299 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:35:59.260299 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:35:59.260299 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:35:59.260660 kubelet[1916]: I0909 00:35:59.260371 1916 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:35:59.271094 kubelet[1916]: I0909 00:35:59.271048 1916 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:35:59.271094 kubelet[1916]: I0909 00:35:59.271080 1916 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:35:59.271729 kubelet[1916]: I0909 00:35:59.271701 1916 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:35:59.274111 kubelet[1916]: I0909 00:35:59.274084 1916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:35:59.276307 kubelet[1916]: I0909 00:35:59.276263 1916 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:35:59.280647 kubelet[1916]: E0909 00:35:59.280593 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:35:59.280647 kubelet[1916]: I0909 00:35:59.280625 1916 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:35:59.283183 kubelet[1916]: I0909 00:35:59.283165 1916 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:35:59.283291 kubelet[1916]: I0909 00:35:59.283277 1916 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:35:59.283401 kubelet[1916]: I0909 00:35:59.283379 1916 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:35:59.283564 kubelet[1916]: I0909 00:35:59.283404 1916 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 00:35:59.283643 kubelet[1916]: I0909 00:35:59.283570 1916 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:35:59.283643 kubelet[1916]: I0909 00:35:59.283580 1916 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:35:59.283643 kubelet[1916]: I0909 00:35:59.283614 1916 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:35:59.283747 kubelet[1916]: I0909 00:35:59.283734 1916 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:35:59.283775 kubelet[1916]: I0909 00:35:59.283755 1916 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:35:59.283775 kubelet[1916]: I0909 00:35:59.283773 1916 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:35:59.283821 kubelet[1916]: I0909 00:35:59.283788 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:35:59.284746 kubelet[1916]: I0909 00:35:59.284721 1916 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:35:59.285359 kubelet[1916]: I0909 00:35:59.285322 1916 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:35:59.285857 kubelet[1916]: I0909 00:35:59.285836 1916 server.go:1274] "Started kubelet" Sep 9 00:35:59.293902 kubelet[1916]: I0909 00:35:59.293868 1916 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:35:59.295524 kubelet[1916]: I0909 00:35:59.295503 1916 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:35:59.296345 kubelet[1916]: I0909 00:35:59.296329 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:35:59.296687 kubelet[1916]: I0909 00:35:59.296667 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:35:59.296959 kubelet[1916]: I0909 
00:35:59.296946 1916 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:35:59.297084 kubelet[1916]: E0909 00:35:59.297068 1916 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:35:59.297456 kubelet[1916]: I0909 00:35:59.297440 1916 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:35:59.297582 kubelet[1916]: I0909 00:35:59.297571 1916 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:35:59.299220 kubelet[1916]: I0909 00:35:59.299169 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:35:59.299467 kubelet[1916]: I0909 00:35:59.299446 1916 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:35:59.302187 kubelet[1916]: I0909 00:35:59.302156 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:35:59.303179 kubelet[1916]: E0909 00:35:59.303141 1916 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:35:59.304298 kubelet[1916]: I0909 00:35:59.304272 1916 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:35:59.304298 kubelet[1916]: I0909 00:35:59.304292 1916 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:35:59.313441 kubelet[1916]: I0909 00:35:59.313398 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:35:59.314337 kubelet[1916]: I0909 00:35:59.314307 1916 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Sep 9 00:35:59.314337 kubelet[1916]: I0909 00:35:59.314329 1916 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 00:35:59.314407 kubelet[1916]: I0909 00:35:59.314347 1916 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 00:35:59.314407 kubelet[1916]: E0909 00:35:59.314390 1916 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:35:59.351682 kubelet[1916]: I0909 00:35:59.351653 1916 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 00:35:59.351818 kubelet[1916]: I0909 00:35:59.351804 1916 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 00:35:59.351879 kubelet[1916]: I0909 00:35:59.351869 1916 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:35:59.352109 kubelet[1916]: I0909 00:35:59.352088 1916 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:35:59.352195 kubelet[1916]: I0909 00:35:59.352171 1916 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:35:59.352246 kubelet[1916]: I0909 00:35:59.352238 1916 policy_none.go:49] "None policy: Start"
Sep 9 00:35:59.352859 kubelet[1916]: I0909 00:35:59.352847 1916 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 00:35:59.352935 kubelet[1916]: I0909 00:35:59.352926 1916 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:35:59.353161 kubelet[1916]: I0909 00:35:59.353141 1916 state_mem.go:75] "Updated machine memory state"
Sep 9 00:35:59.356733 kubelet[1916]: I0909 00:35:59.356711 1916 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:35:59.356904 kubelet[1916]: I0909 00:35:59.356887 1916 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:35:59.357031 kubelet[1916]: I0909 00:35:59.356996 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:35:59.357344 kubelet[1916]: I0909 00:35:59.357333 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:35:59.421453 kubelet[1916]: E0909 00:35:59.421359 1916 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.463477 kubelet[1916]: I0909 00:35:59.463436 1916 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:35:59.470134 kubelet[1916]: I0909 00:35:59.470083 1916 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 9 00:35:59.470249 kubelet[1916]: I0909 00:35:59.470176 1916 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 9 00:35:59.598905 kubelet[1916]: I0909 00:35:59.598866 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.598905 kubelet[1916]: I0909 00:35:59.598906 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.599084 kubelet[1916]: I0909 00:35:59.598932 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:35:59.599084 kubelet[1916]: I0909 00:35:59.598950 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:35:59.599084 kubelet[1916]: I0909 00:35:59.598969 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:35:59.599084 kubelet[1916]: I0909 00:35:59.598990 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.599084 kubelet[1916]: I0909 00:35:59.599029 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.599208 kubelet[1916]: I0909 00:35:59.599074 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:35:59.599208 kubelet[1916]: I0909 00:35:59.599095 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4487195526916549618860bc052eeda5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4487195526916549618860bc052eeda5\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:35:59.721481 kubelet[1916]: E0909 00:35:59.721377 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:35:59.721481 kubelet[1916]: E0909 00:35:59.721414 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:35:59.722353 kubelet[1916]: E0909 00:35:59.721757 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:35:59.861827 sudo[1951]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 00:35:59.862052 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 9 00:36:00.284547 kubelet[1916]: I0909 00:36:00.284519 1916 apiserver.go:52] "Watching apiserver"
Sep 9 00:36:00.297913 kubelet[1916]: I0909 00:36:00.297894 1916 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 00:36:00.332655 sudo[1951]: pam_unix(sudo:session): session closed for user root
Sep 9 00:36:00.334737 kubelet[1916]: E0909 00:36:00.334712 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:00.335628 kubelet[1916]: E0909 00:36:00.335597 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:00.344571 kubelet[1916]: E0909 00:36:00.344275 1916 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:36:00.344855 kubelet[1916]: E0909 00:36:00.344839 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:00.364480 kubelet[1916]: I0909 00:36:00.364428 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.364413093 podStartE2EDuration="1.364413093s" podCreationTimestamp="2025-09-09 00:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:00.35736613 +0000 UTC m=+1.131943806" watchObservedRunningTime="2025-09-09 00:36:00.364413093 +0000 UTC m=+1.138990649"
Sep 9 00:36:00.371362 kubelet[1916]: I0909 00:36:00.371313 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.37129945 podStartE2EDuration="3.37129945s" podCreationTimestamp="2025-09-09 00:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:00.364886786 +0000 UTC m=+1.139464342" watchObservedRunningTime="2025-09-09 00:36:00.37129945 +0000 UTC m=+1.145877006"
Sep 9 00:36:01.336094 kubelet[1916]: E0909 00:36:01.336033 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:01.336602 kubelet[1916]: E0909 00:36:01.336567 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:01.516380 kubelet[1916]: E0909 00:36:01.516346 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:02.166977 sudo[1319]: pam_unix(sudo:session): session closed for user root
Sep 9 00:36:02.169099 sshd[1316]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:02.172226 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:56628.service: Deactivated successfully.
Sep 9 00:36:02.173045 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:36:02.173210 systemd[1]: session-5.scope: Consumed 6.227s CPU time.
Sep 9 00:36:02.173701 systemd-logind[1206]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:36:02.174717 systemd-logind[1206]: Removed session 5.
Sep 9 00:36:06.276527 kubelet[1916]: I0909 00:36:06.276491 1916 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 00:36:06.277569 env[1216]: time="2025-09-09T00:36:06.277532861Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 00:36:06.277825 kubelet[1916]: I0909 00:36:06.277743 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 00:36:06.995314 kubelet[1916]: I0909 00:36:06.995244 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.995229445 podStartE2EDuration="7.995229445s" podCreationTimestamp="2025-09-09 00:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:00.371732601 +0000 UTC m=+1.146310157" watchObservedRunningTime="2025-09-09 00:36:06.995229445 +0000 UTC m=+7.769807001"
Sep 9 00:36:07.001536 systemd[1]: Created slice kubepods-besteffort-pod68c769b1_844d_41f0_a440_afb4bdb7d716.slice.
Sep 9 00:36:07.016081 kubelet[1916]: W0909 00:36:07.016042 1916 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 9 00:36:07.016227 kubelet[1916]: E0909 00:36:07.016083 1916 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 9 00:36:07.019420 systemd[1]: Created slice kubepods-burstable-podaa63bef4_e7f6_4f50_aa14_8a7a52305c96.slice.
Sep 9 00:36:07.049906 kubelet[1916]: I0909 00:36:07.049863 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cni-path\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050107 kubelet[1916]: I0909 00:36:07.050090 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hubble-tls\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050189 kubelet[1916]: I0909 00:36:07.050172 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-cgroup\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050317 kubelet[1916]: I0909 00:36:07.050268 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlb7m\" (UniqueName: \"kubernetes.io/projected/68c769b1-844d-41f0-a440-afb4bdb7d716-kube-api-access-xlb7m\") pod \"kube-proxy-bhqss\" (UID: \"68c769b1-844d-41f0-a440-afb4bdb7d716\") " pod="kube-system/kube-proxy-bhqss"
Sep 9 00:36:07.050399 kubelet[1916]: I0909 00:36:07.050385 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68c769b1-844d-41f0-a440-afb4bdb7d716-lib-modules\") pod \"kube-proxy-bhqss\" (UID: \"68c769b1-844d-41f0-a440-afb4bdb7d716\") " pod="kube-system/kube-proxy-bhqss"
Sep 9 00:36:07.050468 kubelet[1916]: I0909 00:36:07.050456 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-etc-cni-netd\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050536 kubelet[1916]: I0909 00:36:07.050523 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-clustermesh-secrets\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050616 kubelet[1916]: I0909 00:36:07.050602 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-config-path\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050725 kubelet[1916]: I0909 00:36:07.050711 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68c769b1-844d-41f0-a440-afb4bdb7d716-xtables-lock\") pod \"kube-proxy-bhqss\" (UID: \"68c769b1-844d-41f0-a440-afb4bdb7d716\") " pod="kube-system/kube-proxy-bhqss"
Sep 9 00:36:07.050800 kubelet[1916]: I0909 00:36:07.050787 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-net\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050869 kubelet[1916]: I0909 00:36:07.050858 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-lib-modules\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.050946 kubelet[1916]: I0909 00:36:07.050934 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68c769b1-844d-41f0-a440-afb4bdb7d716-kube-proxy\") pod \"kube-proxy-bhqss\" (UID: \"68c769b1-844d-41f0-a440-afb4bdb7d716\") " pod="kube-system/kube-proxy-bhqss"
Sep 9 00:36:07.051019 kubelet[1916]: I0909 00:36:07.051006 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hostproc\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.051091 kubelet[1916]: I0909 00:36:07.051078 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-xtables-lock\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.051158 kubelet[1916]: I0909 00:36:07.051146 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-kernel\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.051229 kubelet[1916]: I0909 00:36:07.051216 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-bpf-maps\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.051297 kubelet[1916]: I0909 00:36:07.051286 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-run\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.051371 kubelet[1916]: I0909 00:36:07.051358 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5t9\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9\") pod \"cilium-kmxxb\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") " pod="kube-system/cilium-kmxxb"
Sep 9 00:36:07.153767 kubelet[1916]: I0909 00:36:07.153729 1916 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 9 00:36:07.166080 kubelet[1916]: E0909 00:36:07.166051 1916 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.166249 kubelet[1916]: E0909 00:36:07.166237 1916 projected.go:194] Error preparing data for projected volume kube-api-access-pp5t9 for pod kube-system/cilium-kmxxb: configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.166356 kubelet[1916]: E0909 00:36:07.166343 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9 podName:aa63bef4-e7f6-4f50-aa14-8a7a52305c96 nodeName:}" failed. No retries permitted until 2025-09-09 00:36:07.666324074 +0000 UTC m=+8.440901630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pp5t9" (UniqueName: "kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9") pod "cilium-kmxxb" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96") : configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.167505 kubelet[1916]: E0909 00:36:07.167470 1916 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.167626 kubelet[1916]: E0909 00:36:07.167612 1916 projected.go:194] Error preparing data for projected volume kube-api-access-xlb7m for pod kube-system/kube-proxy-bhqss: configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.167761 kubelet[1916]: E0909 00:36:07.167749 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68c769b1-844d-41f0-a440-afb4bdb7d716-kube-api-access-xlb7m podName:68c769b1-844d-41f0-a440-afb4bdb7d716 nodeName:}" failed. No retries permitted until 2025-09-09 00:36:07.667734116 +0000 UTC m=+8.442311672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xlb7m" (UniqueName: "kubernetes.io/projected/68c769b1-844d-41f0-a440-afb4bdb7d716-kube-api-access-xlb7m") pod "kube-proxy-bhqss" (UID: "68c769b1-844d-41f0-a440-afb4bdb7d716") : configmap "kube-root-ca.crt" not found
Sep 9 00:36:07.343012 systemd[1]: Created slice kubepods-besteffort-pod5a436b0f_fc12_4c27_afb8_8f0f31fedab1.slice.
Sep 9 00:36:07.354839 kubelet[1916]: I0909 00:36:07.354791 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-cilium-config-path\") pod \"cilium-operator-5d85765b45-bffx8\" (UID: \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\") " pod="kube-system/cilium-operator-5d85765b45-bffx8"
Sep 9 00:36:07.355151 kubelet[1916]: I0909 00:36:07.354850 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwzq8\" (UniqueName: \"kubernetes.io/projected/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-kube-api-access-wwzq8\") pod \"cilium-operator-5d85765b45-bffx8\" (UID: \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\") " pod="kube-system/cilium-operator-5d85765b45-bffx8"
Sep 9 00:36:07.646602 kubelet[1916]: E0909 00:36:07.646494 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:07.648077 env[1216]: time="2025-09-09T00:36:07.647659551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bffx8,Uid:5a436b0f-fc12-4c27-afb8-8f0f31fedab1,Namespace:kube-system,Attempt:0,}"
Sep 9 00:36:07.662239 env[1216]: time="2025-09-09T00:36:07.662171325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:36:07.662239 env[1216]: time="2025-09-09T00:36:07.662214220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:36:07.662399 env[1216]: time="2025-09-09T00:36:07.662366037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:36:07.662606 env[1216]: time="2025-09-09T00:36:07.662569152Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631 pid=2006 runtime=io.containerd.runc.v2
Sep 9 00:36:07.675696 systemd[1]: Started cri-containerd-4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631.scope.
Sep 9 00:36:07.712668 env[1216]: time="2025-09-09T00:36:07.712613803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bffx8,Uid:5a436b0f-fc12-4c27-afb8-8f0f31fedab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\""
Sep 9 00:36:07.713863 kubelet[1916]: E0909 00:36:07.713842 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:07.715181 env[1216]: time="2025-09-09T00:36:07.715147862Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 00:36:07.909763 kubelet[1916]: E0909 00:36:07.909237 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:07.910139 env[1216]: time="2025-09-09T00:36:07.909981568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhqss,Uid:68c769b1-844d-41f0-a440-afb4bdb7d716,Namespace:kube-system,Attempt:0,}"
Sep 9 00:36:07.927553 env[1216]: time="2025-09-09T00:36:07.927475046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:36:07.927678 env[1216]: time="2025-09-09T00:36:07.927562198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:36:07.927678 env[1216]: time="2025-09-09T00:36:07.927588968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:36:07.927825 env[1216]: time="2025-09-09T00:36:07.927773436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb5fdaa678e13a8618d4e0a6df886e9b5e1bb1da6c0863b449a9dffb4f4f5821 pid=2048 runtime=io.containerd.runc.v2
Sep 9 00:36:07.939300 systemd[1]: Started cri-containerd-eb5fdaa678e13a8618d4e0a6df886e9b5e1bb1da6c0863b449a9dffb4f4f5821.scope.
Sep 9 00:36:07.967706 env[1216]: time="2025-09-09T00:36:07.967620591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhqss,Uid:68c769b1-844d-41f0-a440-afb4bdb7d716,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb5fdaa678e13a8618d4e0a6df886e9b5e1bb1da6c0863b449a9dffb4f4f5821\""
Sep 9 00:36:07.968650 kubelet[1916]: E0909 00:36:07.968321 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:07.971219 env[1216]: time="2025-09-09T00:36:07.971187512Z" level=info msg="CreateContainer within sandbox \"eb5fdaa678e13a8618d4e0a6df886e9b5e1bb1da6c0863b449a9dffb4f4f5821\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 00:36:07.983579 env[1216]: time="2025-09-09T00:36:07.983542087Z" level=info msg="CreateContainer within sandbox \"eb5fdaa678e13a8618d4e0a6df886e9b5e1bb1da6c0863b449a9dffb4f4f5821\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1f8c734d6faa9d1f80476f57792182aab7690bc224a81b2a52d1e5edf6087aa\""
Sep 9 00:36:07.984132 env[1216]: time="2025-09-09T00:36:07.984105295Z" level=info msg="StartContainer for \"a1f8c734d6faa9d1f80476f57792182aab7690bc224a81b2a52d1e5edf6087aa\""
Sep 9 00:36:08.002669 systemd[1]: Started cri-containerd-a1f8c734d6faa9d1f80476f57792182aab7690bc224a81b2a52d1e5edf6087aa.scope.
Sep 9 00:36:08.034761 env[1216]: time="2025-09-09T00:36:08.034713124Z" level=info msg="StartContainer for \"a1f8c734d6faa9d1f80476f57792182aab7690bc224a81b2a52d1e5edf6087aa\" returns successfully"
Sep 9 00:36:08.161487 systemd[1]: run-containerd-runc-k8s.io-4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631-runc.lmaADT.mount: Deactivated successfully.
Sep 9 00:36:08.222322 kubelet[1916]: E0909 00:36:08.222280 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:08.222723 env[1216]: time="2025-09-09T00:36:08.222683660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmxxb,Uid:aa63bef4-e7f6-4f50-aa14-8a7a52305c96,Namespace:kube-system,Attempt:0,}"
Sep 9 00:36:08.238256 env[1216]: time="2025-09-09T00:36:08.238188696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:36:08.238256 env[1216]: time="2025-09-09T00:36:08.238234752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:36:08.238256 env[1216]: time="2025-09-09T00:36:08.238245596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:36:08.238411 env[1216]: time="2025-09-09T00:36:08.238365278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810 pid=2165 runtime=io.containerd.runc.v2
Sep 9 00:36:08.256683 systemd[1]: run-containerd-runc-k8s.io-af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810-runc.4kz38D.mount: Deactivated successfully.
Sep 9 00:36:08.258488 systemd[1]: Started cri-containerd-af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810.scope.
Sep 9 00:36:08.286669 env[1216]: time="2025-09-09T00:36:08.286001137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmxxb,Uid:aa63bef4-e7f6-4f50-aa14-8a7a52305c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\""
Sep 9 00:36:08.287535 kubelet[1916]: E0909 00:36:08.287507 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:08.349352 kubelet[1916]: E0909 00:36:08.349256 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:08.374164 kubelet[1916]: I0909 00:36:08.373743 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bhqss" podStartSLOduration=2.37372529 podStartE2EDuration="2.37372529s" podCreationTimestamp="2025-09-09 00:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:08.37249694 +0000 UTC m=+9.147074496" watchObservedRunningTime="2025-09-09 00:36:08.37372529 +0000 UTC m=+9.148302846"
Sep 9 00:36:09.402498 env[1216]: time="2025-09-09T00:36:09.402442510Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:09.403996 env[1216]: time="2025-09-09T00:36:09.403956413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:09.405368 env[1216]: time="2025-09-09T00:36:09.405336431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:09.406022 env[1216]: time="2025-09-09T00:36:09.405990608Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 00:36:09.407538 env[1216]: time="2025-09-09T00:36:09.407504071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 9 00:36:09.408884 env[1216]: time="2025-09-09T00:36:09.408844836Z" level=info msg="CreateContainer within sandbox \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 00:36:09.419661 env[1216]: time="2025-09-09T00:36:09.417929733Z" level=info msg="CreateContainer within sandbox \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\""
Sep 9 00:36:09.419661 env[1216]: time="2025-09-09T00:36:09.418711713Z" level=info msg="StartContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\""
Sep 9 00:36:09.419548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095431466.mount: Deactivated successfully.
Sep 9 00:36:09.429876 kubelet[1916]: E0909 00:36:09.429852 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:09.455789 systemd[1]: Started cri-containerd-3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e.scope.
Sep 9 00:36:09.483681 env[1216]: time="2025-09-09T00:36:09.482799994Z" level=info msg="StartContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" returns successfully"
Sep 9 00:36:10.354337 kubelet[1916]: E0909 00:36:10.354306 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:10.354489 kubelet[1916]: E0909 00:36:10.354368 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:10.367307 kubelet[1916]: I0909 00:36:10.367252 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bffx8" podStartSLOduration=1.674820525 podStartE2EDuration="3.36723601s" podCreationTimestamp="2025-09-09 00:36:07 +0000 UTC" firstStartedPulling="2025-09-09 00:36:07.714726465 +0000 UTC m=+8.489304022" lastFinishedPulling="2025-09-09 00:36:09.407141991 +0000 UTC m=+10.181719507" observedRunningTime="2025-09-09 00:36:10.366474171 +0000 UTC m=+11.141051687" watchObservedRunningTime="2025-09-09 00:36:10.36723601 +0000 UTC m=+11.141813566"
Sep 9 00:36:11.183427 kubelet[1916]: E0909 00:36:11.183357 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:11.356378 kubelet[1916]: E0909 00:36:11.356349 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:11.523390 kubelet[1916]: E0909 00:36:11.523292 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:36:14.985897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137216428.mount: Deactivated successfully.
Sep 9 00:36:14.989262 update_engine[1207]: I0909 00:36:14.989212 1207 update_attempter.cc:509] Updating boot flags...
Sep 9 00:36:17.294570 env[1216]: time="2025-09-09T00:36:17.294504223Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:17.296869 env[1216]: time="2025-09-09T00:36:17.296831375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:17.298285 env[1216]: time="2025-09-09T00:36:17.298251208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:17.298781 env[1216]: time="2025-09-09T00:36:17.298745637Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 00:36:17.302435 env[1216]: time="2025-09-09T00:36:17.302399201Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:36:17.313402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626423660.mount: Deactivated successfully.
Sep 9 00:36:17.315111 env[1216]: time="2025-09-09T00:36:17.315044265Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\""
Sep 9 00:36:17.317149 env[1216]: time="2025-09-09T00:36:17.316745760Z" level=info msg="StartContainer for \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\""
Sep 9 00:36:17.337488 systemd[1]: Started cri-containerd-31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2.scope.
Sep 9 00:36:17.383792 systemd[1]: cri-containerd-31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2.scope: Deactivated successfully.
Sep 9 00:36:17.412794 env[1216]: time="2025-09-09T00:36:17.412748256Z" level=info msg="StartContainer for \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\" returns successfully"
Sep 9 00:36:17.529486 env[1216]: time="2025-09-09T00:36:17.529439186Z" level=info msg="shim disconnected" id=31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2
Sep 9 00:36:17.529486 env[1216]: time="2025-09-09T00:36:17.529482996Z" level=warning msg="cleaning up after shim disconnected" id=31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2 namespace=k8s.io
Sep 9 00:36:17.529486 env[1216]: time="2025-09-09T00:36:17.529491678Z" level=info msg="cleaning up dead shim"
Sep 9 00:36:17.536103 env[1216]: time="2025-09-09T00:36:17.536055603Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:36:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2389 runtime=io.containerd.runc.v2\n"
Sep 9 00:36:18.310975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2-rootfs.mount: Deactivated successfully.
Sep 9 00:36:18.420085 kubelet[1916]: E0909 00:36:18.420055 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:18.423136 env[1216]: time="2025-09-09T00:36:18.423084798Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:36:18.450066 env[1216]: time="2025-09-09T00:36:18.450022569Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\"" Sep 9 00:36:18.452194 env[1216]: time="2025-09-09T00:36:18.452029751Z" level=info msg="StartContainer for \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\"" Sep 9 00:36:18.468022 systemd[1]: Started cri-containerd-a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47.scope. Sep 9 00:36:18.496857 env[1216]: time="2025-09-09T00:36:18.496791782Z" level=info msg="StartContainer for \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\" returns successfully" Sep 9 00:36:18.506525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:36:18.506771 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:36:18.506961 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:36:18.508567 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:36:18.511228 systemd[1]: cri-containerd-a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47.scope: Deactivated successfully. Sep 9 00:36:18.517336 systemd[1]: Finished systemd-sysctl.service. 
Sep 9 00:36:18.529708 env[1216]: time="2025-09-09T00:36:18.529657238Z" level=info msg="shim disconnected" id=a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47 Sep 9 00:36:18.529708 env[1216]: time="2025-09-09T00:36:18.529704888Z" level=warning msg="cleaning up after shim disconnected" id=a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47 namespace=k8s.io Sep 9 00:36:18.529941 env[1216]: time="2025-09-09T00:36:18.529713969Z" level=info msg="cleaning up dead shim" Sep 9 00:36:18.535649 env[1216]: time="2025-09-09T00:36:18.535607206Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:36:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2451 runtime=io.containerd.runc.v2\n" Sep 9 00:36:19.425687 kubelet[1916]: E0909 00:36:19.423750 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:19.427623 env[1216]: time="2025-09-09T00:36:19.427587688Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:36:19.498701 env[1216]: time="2025-09-09T00:36:19.498580414Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\"" Sep 9 00:36:19.499559 env[1216]: time="2025-09-09T00:36:19.499441586Z" level=info msg="StartContainer for \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\"" Sep 9 00:36:19.528406 systemd[1]: Started cri-containerd-8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707.scope. 
Sep 9 00:36:19.564962 env[1216]: time="2025-09-09T00:36:19.564810387Z" level=info msg="StartContainer for \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\" returns successfully" Sep 9 00:36:19.565379 systemd[1]: cri-containerd-8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707.scope: Deactivated successfully. Sep 9 00:36:19.596961 env[1216]: time="2025-09-09T00:36:19.596620232Z" level=info msg="shim disconnected" id=8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707 Sep 9 00:36:19.596961 env[1216]: time="2025-09-09T00:36:19.596738296Z" level=warning msg="cleaning up after shim disconnected" id=8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707 namespace=k8s.io Sep 9 00:36:19.596961 env[1216]: time="2025-09-09T00:36:19.596751458Z" level=info msg="cleaning up dead shim" Sep 9 00:36:19.603941 env[1216]: time="2025-09-09T00:36:19.603716252Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:36:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2507 runtime=io.containerd.runc.v2\n" Sep 9 00:36:20.313167 systemd[1]: run-containerd-runc-k8s.io-8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707-runc.9TySqt.mount: Deactivated successfully. Sep 9 00:36:20.313287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707-rootfs.mount: Deactivated successfully. 
Sep 9 00:36:20.430569 kubelet[1916]: E0909 00:36:20.429521 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:20.440941 env[1216]: time="2025-09-09T00:36:20.440876579Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:36:20.465327 env[1216]: time="2025-09-09T00:36:20.465181502Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\"" Sep 9 00:36:20.466696 env[1216]: time="2025-09-09T00:36:20.465791098Z" level=info msg="StartContainer for \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\"" Sep 9 00:36:20.486587 systemd[1]: Started cri-containerd-8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b.scope. Sep 9 00:36:20.517172 systemd[1]: cri-containerd-8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b.scope: Deactivated successfully. 
Sep 9 00:36:20.520985 env[1216]: time="2025-09-09T00:36:20.520902585Z" level=info msg="StartContainer for \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\" returns successfully" Sep 9 00:36:20.541105 env[1216]: time="2025-09-09T00:36:20.541062756Z" level=info msg="shim disconnected" id=8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b Sep 9 00:36:20.541105 env[1216]: time="2025-09-09T00:36:20.541103563Z" level=warning msg="cleaning up after shim disconnected" id=8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b namespace=k8s.io Sep 9 00:36:20.541105 env[1216]: time="2025-09-09T00:36:20.541113565Z" level=info msg="cleaning up dead shim" Sep 9 00:36:20.547577 env[1216]: time="2025-09-09T00:36:20.547518789Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:36:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" Sep 9 00:36:21.312969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b-rootfs.mount: Deactivated successfully. 
Sep 9 00:36:21.443524 kubelet[1916]: E0909 00:36:21.442152 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:21.449069 env[1216]: time="2025-09-09T00:36:21.449005208Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:36:21.482895 env[1216]: time="2025-09-09T00:36:21.482837141Z" level=info msg="CreateContainer within sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\"" Sep 9 00:36:21.484837 env[1216]: time="2025-09-09T00:36:21.484758412Z" level=info msg="StartContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\"" Sep 9 00:36:21.503512 systemd[1]: Started cri-containerd-2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f.scope. Sep 9 00:36:21.534456 env[1216]: time="2025-09-09T00:36:21.534351022Z" level=info msg="StartContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" returns successfully" Sep 9 00:36:21.645157 kubelet[1916]: I0909 00:36:21.645067 1916 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:36:21.678603 systemd[1]: Created slice kubepods-burstable-podfdb51f1f_fadc_4efb_9718_c938e0890565.slice. Sep 9 00:36:21.681657 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:36:21.683603 systemd[1]: Created slice kubepods-burstable-pod7fb1877c_afd1_4a7a_a4c5_b64aaf3fd65b.slice. 
Sep 9 00:36:21.760874 kubelet[1916]: I0909 00:36:21.760822 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b-config-volume\") pod \"coredns-7c65d6cfc9-l67zz\" (UID: \"7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b\") " pod="kube-system/coredns-7c65d6cfc9-l67zz" Sep 9 00:36:21.761100 kubelet[1916]: I0909 00:36:21.761072 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpw7\" (UniqueName: \"kubernetes.io/projected/fdb51f1f-fadc-4efb-9718-c938e0890565-kube-api-access-xdpw7\") pod \"coredns-7c65d6cfc9-p84rk\" (UID: \"fdb51f1f-fadc-4efb-9718-c938e0890565\") " pod="kube-system/coredns-7c65d6cfc9-p84rk" Sep 9 00:36:21.761194 kubelet[1916]: I0909 00:36:21.761180 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2lj\" (UniqueName: \"kubernetes.io/projected/7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b-kube-api-access-xb2lj\") pod \"coredns-7c65d6cfc9-l67zz\" (UID: \"7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b\") " pod="kube-system/coredns-7c65d6cfc9-l67zz" Sep 9 00:36:21.761281 kubelet[1916]: I0909 00:36:21.761263 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb51f1f-fadc-4efb-9718-c938e0890565-config-volume\") pod \"coredns-7c65d6cfc9-p84rk\" (UID: \"fdb51f1f-fadc-4efb-9718-c938e0890565\") " pod="kube-system/coredns-7c65d6cfc9-p84rk" Sep 9 00:36:21.927663 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 9 00:36:21.982455 kubelet[1916]: E0909 00:36:21.982412 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:21.983170 env[1216]: time="2025-09-09T00:36:21.983130714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p84rk,Uid:fdb51f1f-fadc-4efb-9718-c938e0890565,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:21.986349 kubelet[1916]: E0909 00:36:21.986323 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:21.986766 env[1216]: time="2025-09-09T00:36:21.986733611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l67zz,Uid:7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b,Namespace:kube-system,Attempt:0,}" Sep 9 00:36:22.446027 kubelet[1916]: E0909 00:36:22.445800 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:23.447446 kubelet[1916]: E0909 00:36:23.447412 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:23.544558 systemd-networkd[1046]: cilium_host: Link UP Sep 9 00:36:23.545114 systemd-networkd[1046]: cilium_net: Link UP Sep 9 00:36:23.547131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 9 00:36:23.547197 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 9 00:36:23.546883 systemd-networkd[1046]: cilium_net: Gained carrier Sep 9 00:36:23.547058 systemd-networkd[1046]: cilium_host: Gained carrier Sep 9 00:36:23.635689 systemd-networkd[1046]: cilium_vxlan: Link UP Sep 9 00:36:23.635695 systemd-networkd[1046]: cilium_vxlan: 
Gained carrier Sep 9 00:36:23.881669 kernel: NET: Registered PF_ALG protocol family Sep 9 00:36:24.311768 systemd-networkd[1046]: cilium_net: Gained IPv6LL Sep 9 00:36:24.449364 kubelet[1916]: E0909 00:36:24.449317 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:24.478608 systemd-networkd[1046]: lxc_health: Link UP Sep 9 00:36:24.484839 systemd-networkd[1046]: lxc_health: Gained carrier Sep 9 00:36:24.485664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:36:24.568385 systemd-networkd[1046]: cilium_host: Gained IPv6LL Sep 9 00:36:24.588961 systemd-networkd[1046]: lxc3768fd0c104a: Link UP Sep 9 00:36:24.595662 kernel: eth0: renamed from tmp17fb7 Sep 9 00:36:24.603369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:36:24.603438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3768fd0c104a: link becomes ready Sep 9 00:36:24.603759 systemd-networkd[1046]: lxc3768fd0c104a: Gained carrier Sep 9 00:36:24.604104 systemd-networkd[1046]: lxceb06e40f7bf1: Link UP Sep 9 00:36:24.614663 kernel: eth0: renamed from tmp14876 Sep 9 00:36:24.625434 systemd-networkd[1046]: lxceb06e40f7bf1: Gained carrier Sep 9 00:36:24.625826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb06e40f7bf1: link becomes ready Sep 9 00:36:25.527798 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL Sep 9 00:36:26.226729 kubelet[1916]: E0909 00:36:26.226690 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:26.259731 kubelet[1916]: I0909 00:36:26.259671 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kmxxb" podStartSLOduration=11.249103603 podStartE2EDuration="20.259654442s" podCreationTimestamp="2025-09-09 00:36:06 +0000 UTC" 
firstStartedPulling="2025-09-09 00:36:08.289509927 +0000 UTC m=+9.064087443" lastFinishedPulling="2025-09-09 00:36:17.300060726 +0000 UTC m=+18.074638282" observedRunningTime="2025-09-09 00:36:22.469442987 +0000 UTC m=+23.244020543" watchObservedRunningTime="2025-09-09 00:36:26.259654442 +0000 UTC m=+27.034231998" Sep 9 00:36:26.296736 systemd-networkd[1046]: lxceb06e40f7bf1: Gained IPv6LL Sep 9 00:36:26.359816 systemd-networkd[1046]: lxc3768fd0c104a: Gained IPv6LL Sep 9 00:36:26.452917 kubelet[1916]: E0909 00:36:26.452885 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:26.551781 systemd-networkd[1046]: lxc_health: Gained IPv6LL Sep 9 00:36:27.454283 kubelet[1916]: E0909 00:36:27.454250 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.144845 env[1216]: time="2025-09-09T00:36:28.144767551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:28.145261 env[1216]: time="2025-09-09T00:36:28.144814718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:28.145261 env[1216]: time="2025-09-09T00:36:28.144827240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:28.145261 env[1216]: time="2025-09-09T00:36:28.144962138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/148761cda1d0707ce7727689a6e29408e275a8843229b0d0520996964cdef75b pid=3139 runtime=io.containerd.runc.v2 Sep 9 00:36:28.152897 env[1216]: time="2025-09-09T00:36:28.152811765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:36:28.152897 env[1216]: time="2025-09-09T00:36:28.152867493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:36:28.153113 env[1216]: time="2025-09-09T00:36:28.152878894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:36:28.158282 env[1216]: time="2025-09-09T00:36:28.153346398Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17fb7a72c9a3dcfda737bff7085ea818a446c3f70c8d5983ff1a49a971c568d9 pid=3161 runtime=io.containerd.runc.v2 Sep 9 00:36:28.161843 systemd[1]: Started cri-containerd-148761cda1d0707ce7727689a6e29408e275a8843229b0d0520996964cdef75b.scope. Sep 9 00:36:28.175657 systemd[1]: Started cri-containerd-17fb7a72c9a3dcfda737bff7085ea818a446c3f70c8d5983ff1a49a971c568d9.scope. 
Sep 9 00:36:28.189687 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:36:28.194936 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:36:28.211968 env[1216]: time="2025-09-09T00:36:28.211926803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p84rk,Uid:fdb51f1f-fadc-4efb-9718-c938e0890565,Namespace:kube-system,Attempt:0,} returns sandbox id \"148761cda1d0707ce7727689a6e29408e275a8843229b0d0520996964cdef75b\"" Sep 9 00:36:28.212777 kubelet[1916]: E0909 00:36:28.212713 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.214960 env[1216]: time="2025-09-09T00:36:28.214917570Z" level=info msg="CreateContainer within sandbox \"148761cda1d0707ce7727689a6e29408e275a8843229b0d0520996964cdef75b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:36:28.219966 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:52382.service. 
Sep 9 00:36:28.222919 env[1216]: time="2025-09-09T00:36:28.222821164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l67zz,Uid:7fb1877c-afd1-4a7a-a4c5-b64aaf3fd65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17fb7a72c9a3dcfda737bff7085ea818a446c3f70c8d5983ff1a49a971c568d9\"" Sep 9 00:36:28.228954 kubelet[1916]: E0909 00:36:28.228928 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.229037 env[1216]: time="2025-09-09T00:36:28.228945317Z" level=info msg="CreateContainer within sandbox \"148761cda1d0707ce7727689a6e29408e275a8843229b0d0520996964cdef75b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1032f9879fcffacb1517d64b2d89473898354c8f27a23758933022ff552d897f\"" Sep 9 00:36:28.229517 env[1216]: time="2025-09-09T00:36:28.229489071Z" level=info msg="StartContainer for \"1032f9879fcffacb1517d64b2d89473898354c8f27a23758933022ff552d897f\"" Sep 9 00:36:28.231325 env[1216]: time="2025-09-09T00:36:28.231283155Z" level=info msg="CreateContainer within sandbox \"17fb7a72c9a3dcfda737bff7085ea818a446c3f70c8d5983ff1a49a971c568d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:36:28.248156 systemd[1]: Started cri-containerd-1032f9879fcffacb1517d64b2d89473898354c8f27a23758933022ff552d897f.scope. 
Sep 9 00:36:28.248819 env[1216]: time="2025-09-09T00:36:28.248776254Z" level=info msg="CreateContainer within sandbox \"17fb7a72c9a3dcfda737bff7085ea818a446c3f70c8d5983ff1a49a971c568d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cba8cc0310108ef91cb07e8c1e23e875f96bbe8f566b78aa9a990a7aa0120ea0\"" Sep 9 00:36:28.252610 env[1216]: time="2025-09-09T00:36:28.252558048Z" level=info msg="StartContainer for \"cba8cc0310108ef91cb07e8c1e23e875f96bbe8f566b78aa9a990a7aa0120ea0\"" Sep 9 00:36:28.266909 sshd[3213]: Accepted publickey for core from 10.0.0.1 port 52382 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:36:28.268708 sshd[3213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:28.272403 systemd-logind[1206]: New session 6 of user core. Sep 9 00:36:28.273281 systemd[1]: Started session-6.scope. Sep 9 00:36:28.281116 systemd[1]: Started cri-containerd-cba8cc0310108ef91cb07e8c1e23e875f96bbe8f566b78aa9a990a7aa0120ea0.scope. Sep 9 00:36:28.290040 env[1216]: time="2025-09-09T00:36:28.289922768Z" level=info msg="StartContainer for \"1032f9879fcffacb1517d64b2d89473898354c8f27a23758933022ff552d897f\" returns successfully" Sep 9 00:36:28.314345 env[1216]: time="2025-09-09T00:36:28.314270279Z" level=info msg="StartContainer for \"cba8cc0310108ef91cb07e8c1e23e875f96bbe8f566b78aa9a990a7aa0120ea0\" returns successfully" Sep 9 00:36:28.406297 sshd[3213]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:28.408832 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:52382.service: Deactivated successfully. Sep 9 00:36:28.409701 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:36:28.410361 systemd-logind[1206]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:36:28.411062 systemd-logind[1206]: Removed session 6. 
Sep 9 00:36:28.457764 kubelet[1916]: E0909 00:36:28.457030 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.459698 kubelet[1916]: E0909 00:36:28.459443 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:28.467036 kubelet[1916]: I0909 00:36:28.466985 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-l67zz" podStartSLOduration=21.466971362 podStartE2EDuration="21.466971362s" podCreationTimestamp="2025-09-09 00:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:28.466006671 +0000 UTC m=+29.240584227" watchObservedRunningTime="2025-09-09 00:36:28.466971362 +0000 UTC m=+29.241548918" Sep 9 00:36:28.474437 kubelet[1916]: I0909 00:36:28.474384 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p84rk" podStartSLOduration=21.474368328 podStartE2EDuration="21.474368328s" podCreationTimestamp="2025-09-09 00:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:36:28.473930108 +0000 UTC m=+29.248507664" watchObservedRunningTime="2025-09-09 00:36:28.474368328 +0000 UTC m=+29.248945844" Sep 9 00:36:29.461339 kubelet[1916]: E0909 00:36:29.461309 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:29.462676 kubelet[1916]: E0909 00:36:29.461356 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:30.462698 kubelet[1916]: E0909 00:36:30.462667 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:30.463112 kubelet[1916]: E0909 00:36:30.462789 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:36:33.413008 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:49080.service. Sep 9 00:36:33.448005 sshd[3314]: Accepted publickey for core from 10.0.0.1 port 49080 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:36:33.449228 sshd[3314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:33.455038 systemd-logind[1206]: New session 7 of user core. Sep 9 00:36:33.456922 systemd[1]: Started session-7.scope. Sep 9 00:36:33.601721 sshd[3314]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:33.604892 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:49080.service: Deactivated successfully. Sep 9 00:36:33.605716 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:36:33.606400 systemd-logind[1206]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:36:33.607032 systemd-logind[1206]: Removed session 7. Sep 9 00:36:38.605929 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:49092.service. Sep 9 00:36:38.647779 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 49092 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:36:38.649320 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:38.653790 systemd[1]: Started session-8.scope. Sep 9 00:36:38.654240 systemd-logind[1206]: New session 8 of user core. 
Sep 9 00:36:38.768561 sshd[3332]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:38.771251 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:49092.service: Deactivated successfully. Sep 9 00:36:38.771986 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:36:38.773021 systemd-logind[1206]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:36:38.773803 systemd-logind[1206]: Removed session 8. Sep 9 00:36:43.774158 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:33134.service. Sep 9 00:36:43.806132 sshd[3347]: Accepted publickey for core from 10.0.0.1 port 33134 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:36:43.807298 sshd[3347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:43.810660 systemd-logind[1206]: New session 9 of user core. Sep 9 00:36:43.811512 systemd[1]: Started session-9.scope. Sep 9 00:36:43.926504 sshd[3347]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:43.930319 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:33140.service. Sep 9 00:36:43.930860 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:33134.service: Deactivated successfully. Sep 9 00:36:43.931508 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:36:43.932042 systemd-logind[1206]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:36:43.932778 systemd-logind[1206]: Removed session 9. Sep 9 00:36:43.963442 sshd[3361]: Accepted publickey for core from 10.0.0.1 port 33140 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:36:43.964936 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:36:43.967970 systemd-logind[1206]: New session 10 of user core. Sep 9 00:36:43.968822 systemd[1]: Started session-10.scope. Sep 9 00:36:44.133153 sshd[3361]: pam_unix(sshd:session): session closed for user core Sep 9 00:36:44.136412 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:33152.service. 
Sep 9 00:36:44.137402 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:33140.service: Deactivated successfully.
Sep 9 00:36:44.138169 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:36:44.146892 systemd-logind[1206]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:36:44.149671 systemd-logind[1206]: Removed session 10.
Sep 9 00:36:44.176826 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 33152 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:44.178113 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:44.181360 systemd-logind[1206]: New session 11 of user core.
Sep 9 00:36:44.182245 systemd[1]: Started session-11.scope.
Sep 9 00:36:44.309509 sshd[3374]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:44.312249 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:36:44.312850 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:33152.service: Deactivated successfully.
Sep 9 00:36:44.313626 systemd-logind[1206]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:36:44.314241 systemd-logind[1206]: Removed session 11.
Sep 9 00:36:49.314932 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:33154.service.
Sep 9 00:36:49.361164 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:49.366966 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:49.375265 systemd-logind[1206]: New session 12 of user core.
Sep 9 00:36:49.376408 systemd[1]: Started session-12.scope.
Sep 9 00:36:49.510258 sshd[3389]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:49.512949 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:33154.service: Deactivated successfully.
Sep 9 00:36:49.513762 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:36:49.514353 systemd-logind[1206]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:36:49.515094 systemd-logind[1206]: Removed session 12.
Sep 9 00:36:54.518574 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:41696.service.
Sep 9 00:36:54.556084 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 41696 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:54.557729 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:54.564193 systemd-logind[1206]: New session 13 of user core.
Sep 9 00:36:54.564749 systemd[1]: Started session-13.scope.
Sep 9 00:36:54.696188 sshd[3404]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:54.700655 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:41706.service.
Sep 9 00:36:54.701238 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:41696.service: Deactivated successfully.
Sep 9 00:36:54.702164 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:36:54.702766 systemd-logind[1206]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:36:54.703547 systemd-logind[1206]: Removed session 13.
Sep 9 00:36:54.741485 sshd[3416]: Accepted publickey for core from 10.0.0.1 port 41706 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:54.742913 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:54.746673 systemd-logind[1206]: New session 14 of user core.
Sep 9 00:36:54.747542 systemd[1]: Started session-14.scope.
Sep 9 00:36:54.937179 sshd[3416]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:54.941224 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:41720.service.
Sep 9 00:36:54.941882 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:41706.service: Deactivated successfully.
Sep 9 00:36:54.942599 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:36:54.943307 systemd-logind[1206]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:36:54.944079 systemd-logind[1206]: Removed session 14.
Sep 9 00:36:54.977744 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 41720 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:54.979815 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:54.985266 systemd-logind[1206]: New session 15 of user core.
Sep 9 00:36:54.986083 systemd[1]: Started session-15.scope.
Sep 9 00:36:56.237214 sshd[3428]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:56.241564 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:41732.service.
Sep 9 00:36:56.242281 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:41720.service: Deactivated successfully.
Sep 9 00:36:56.243543 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:36:56.245098 systemd-logind[1206]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:36:56.246109 systemd-logind[1206]: Removed session 15.
Sep 9 00:36:56.290118 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 41732 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:56.291740 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:56.296288 systemd[1]: Started session-16.scope.
Sep 9 00:36:56.296438 systemd-logind[1206]: New session 16 of user core.
Sep 9 00:36:56.535989 sshd[3447]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:56.539828 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:41744.service.
Sep 9 00:36:56.543516 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:41732.service: Deactivated successfully.
Sep 9 00:36:56.544369 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:36:56.547972 systemd-logind[1206]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:36:56.549183 systemd-logind[1206]: Removed session 16.
Sep 9 00:36:56.581997 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 41744 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:56.583853 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:56.589354 systemd[1]: Started session-17.scope.
Sep 9 00:36:56.589741 systemd-logind[1206]: New session 17 of user core.
Sep 9 00:36:56.714092 sshd[3461]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:56.717443 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:41744.service: Deactivated successfully.
Sep 9 00:36:56.718314 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:36:56.719528 systemd-logind[1206]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:36:56.720312 systemd-logind[1206]: Removed session 17.
Sep 9 00:37:01.719702 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:35446.service.
Sep 9 00:37:01.751790 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 35446 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:01.753403 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:01.757594 systemd[1]: Started session-18.scope.
Sep 9 00:37:01.757899 systemd-logind[1206]: New session 18 of user core.
Sep 9 00:37:01.871185 sshd[3477]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:01.873552 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:35446.service: Deactivated successfully.
Sep 9 00:37:01.874365 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:37:01.874863 systemd-logind[1206]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:37:01.875479 systemd-logind[1206]: Removed session 18.
Sep 9 00:37:06.876287 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:35454.service.
Sep 9 00:37:06.908047 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:06.909248 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:06.913046 systemd-logind[1206]: New session 19 of user core.
Sep 9 00:37:06.913499 systemd[1]: Started session-19.scope.
Sep 9 00:37:07.019464 sshd[3493]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:07.021794 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:35454.service: Deactivated successfully.
Sep 9 00:37:07.022605 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:37:07.023145 systemd-logind[1206]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:37:07.023887 systemd-logind[1206]: Removed session 19.
Sep 9 00:37:12.025478 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:33824.service.
Sep 9 00:37:12.058696 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 33824 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:12.059484 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:12.063778 systemd-logind[1206]: New session 20 of user core.
Sep 9 00:37:12.065471 systemd[1]: Started session-20.scope.
Sep 9 00:37:12.184006 sshd[3508]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:12.186466 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:33824.service: Deactivated successfully.
Sep 9 00:37:12.187315 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:37:12.187814 systemd-logind[1206]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:37:12.188456 systemd-logind[1206]: Removed session 20.
Sep 9 00:37:15.315615 kubelet[1916]: E0909 00:37:15.315577 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:17.188574 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:33838.service.
Sep 9 00:37:17.225133 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 33838 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:17.226396 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:17.230570 systemd-logind[1206]: New session 21 of user core.
Sep 9 00:37:17.231443 systemd[1]: Started session-21.scope.
Sep 9 00:37:17.370904 sshd[3521]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:17.375334 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:33852.service.
Sep 9 00:37:17.377089 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:33838.service: Deactivated successfully.
Sep 9 00:37:17.378314 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:37:17.379132 systemd-logind[1206]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:37:17.380103 systemd-logind[1206]: Removed session 21.
Sep 9 00:37:17.418649 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:17.420267 sshd[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:17.424607 systemd-logind[1206]: New session 22 of user core.
Sep 9 00:37:17.425491 systemd[1]: Started session-22.scope.
Sep 9 00:37:19.320903 kubelet[1916]: E0909 00:37:19.320869 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:19.461358 systemd[1]: run-containerd-runc-k8s.io-2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f-runc.KhoRDW.mount: Deactivated successfully.
Sep 9 00:37:19.463975 env[1216]: time="2025-09-09T00:37:19.463910386Z" level=info msg="StopContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" with timeout 30 (s)"
Sep 9 00:37:19.464782 env[1216]: time="2025-09-09T00:37:19.464752485Z" level=info msg="Stop container \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" with signal terminated"
Sep 9 00:37:19.475168 systemd[1]: cri-containerd-3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e.scope: Deactivated successfully.
Sep 9 00:37:19.490031 env[1216]: time="2025-09-09T00:37:19.489945395Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:37:19.495939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e-rootfs.mount: Deactivated successfully.
Sep 9 00:37:19.496347 env[1216]: time="2025-09-09T00:37:19.496120445Z" level=info msg="StopContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" with timeout 2 (s)"
Sep 9 00:37:19.496758 env[1216]: time="2025-09-09T00:37:19.496689512Z" level=info msg="Stop container \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" with signal terminated"
Sep 9 00:37:19.502690 systemd-networkd[1046]: lxc_health: Link DOWN
Sep 9 00:37:19.502697 systemd-networkd[1046]: lxc_health: Lost carrier
Sep 9 00:37:19.505472 env[1216]: time="2025-09-09T00:37:19.505428140Z" level=info msg="shim disconnected" id=3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e
Sep 9 00:37:19.505472 env[1216]: time="2025-09-09T00:37:19.505477099Z" level=warning msg="cleaning up after shim disconnected" id=3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e namespace=k8s.io
Sep 9 00:37:19.505655 env[1216]: time="2025-09-09T00:37:19.505487378Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:19.512222 env[1216]: time="2025-09-09T00:37:19.512170696Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3588 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:19.515917 env[1216]: time="2025-09-09T00:37:19.515872647Z" level=info msg="StopContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" returns successfully"
Sep 9 00:37:19.516623 env[1216]: time="2025-09-09T00:37:19.516578550Z" level=info msg="StopPodSandbox for \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\""
Sep 9 00:37:19.516704 env[1216]: time="2025-09-09T00:37:19.516683027Z" level=info msg="Container to stop \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.518421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631-shm.mount: Deactivated successfully.
Sep 9 00:37:19.525648 systemd[1]: cri-containerd-4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631.scope: Deactivated successfully.
Sep 9 00:37:19.526966 systemd[1]: cri-containerd-2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f.scope: Deactivated successfully.
Sep 9 00:37:19.527261 systemd[1]: cri-containerd-2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f.scope: Consumed 6.146s CPU time.
Sep 9 00:37:19.558563 env[1216]: time="2025-09-09T00:37:19.558505814Z" level=info msg="shim disconnected" id=2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f
Sep 9 00:37:19.558563 env[1216]: time="2025-09-09T00:37:19.558553173Z" level=warning msg="cleaning up after shim disconnected" id=2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f namespace=k8s.io
Sep 9 00:37:19.558563 env[1216]: time="2025-09-09T00:37:19.558563692Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:19.558908 env[1216]: time="2025-09-09T00:37:19.558856365Z" level=info msg="shim disconnected" id=4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631
Sep 9 00:37:19.558908 env[1216]: time="2025-09-09T00:37:19.558901284Z" level=warning msg="cleaning up after shim disconnected" id=4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631 namespace=k8s.io
Sep 9 00:37:19.559000 env[1216]: time="2025-09-09T00:37:19.558910684Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:19.566078 env[1216]: time="2025-09-09T00:37:19.566028072Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3636 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:19.566375 env[1216]: time="2025-09-09T00:37:19.566347704Z" level=info msg="TearDown network for sandbox \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\" successfully"
Sep 9 00:37:19.566411 env[1216]: time="2025-09-09T00:37:19.566378343Z" level=info msg="StopPodSandbox for \"4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631\" returns successfully"
Sep 9 00:37:19.567122 env[1216]: time="2025-09-09T00:37:19.567097126Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3635 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:19.569401 env[1216]: time="2025-09-09T00:37:19.569364471Z" level=info msg="StopContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" returns successfully"
Sep 9 00:37:19.569772 env[1216]: time="2025-09-09T00:37:19.569746022Z" level=info msg="StopPodSandbox for \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\""
Sep 9 00:37:19.569815 env[1216]: time="2025-09-09T00:37:19.569801900Z" level=info msg="Container to stop \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.569855 env[1216]: time="2025-09-09T00:37:19.569817380Z" level=info msg="Container to stop \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.569855 env[1216]: time="2025-09-09T00:37:19.569828819Z" level=info msg="Container to stop \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.569855 env[1216]: time="2025-09-09T00:37:19.569842099Z" level=info msg="Container to stop \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.569855 env[1216]: time="2025-09-09T00:37:19.569852899Z" level=info msg="Container to stop \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:37:19.577953 kubelet[1916]: I0909 00:37:19.577528 1916 scope.go:117] "RemoveContainer" containerID="3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e"
Sep 9 00:37:19.581953 systemd[1]: cri-containerd-af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810.scope: Deactivated successfully.
Sep 9 00:37:19.584737 env[1216]: time="2025-09-09T00:37:19.584694499Z" level=info msg="RemoveContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\""
Sep 9 00:37:19.589289 env[1216]: time="2025-09-09T00:37:19.589183351Z" level=info msg="RemoveContainer for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" returns successfully"
Sep 9 00:37:19.589478 kubelet[1916]: I0909 00:37:19.589432 1916 scope.go:117] "RemoveContainer" containerID="3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e"
Sep 9 00:37:19.589746 env[1216]: time="2025-09-09T00:37:19.589677339Z" level=error msg="ContainerStatus for \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\": not found"
Sep 9 00:37:19.591154 kubelet[1916]: E0909 00:37:19.591117 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\": not found" containerID="3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e"
Sep 9 00:37:19.591255 kubelet[1916]: I0909 00:37:19.591168 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e"} err="failed to get container status \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f0c3d8fe1aa0baa9239f545469a89404c3b1e297f39b020549dd9f62997558e\": not found"
Sep 9 00:37:19.610346 env[1216]: time="2025-09-09T00:37:19.610289639Z" level=info msg="shim disconnected" id=af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810
Sep 9 00:37:19.610346 env[1216]: time="2025-09-09T00:37:19.610340598Z" level=warning msg="cleaning up after shim disconnected" id=af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810 namespace=k8s.io
Sep 9 00:37:19.610346 env[1216]: time="2025-09-09T00:37:19.610350518Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:19.617032 env[1216]: time="2025-09-09T00:37:19.616993117Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3677 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:19.617365 env[1216]: time="2025-09-09T00:37:19.617339388Z" level=info msg="TearDown network for sandbox \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" successfully"
Sep 9 00:37:19.617409 env[1216]: time="2025-09-09T00:37:19.617366308Z" level=info msg="StopPodSandbox for \"af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810\" returns successfully"
Sep 9 00:37:19.640051 kubelet[1916]: I0909 00:37:19.639997 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hubble-tls\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640094 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-cgroup\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640118 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-etc-cni-netd\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640133 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-net\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640149 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-kernel\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640178 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwzq8\" (UniqueName: \"kubernetes.io/projected/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-kube-api-access-wwzq8\") pod \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\" (UID: \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\") "
Sep 9 00:37:19.640233 kubelet[1916]: I0909 00:37:19.640200 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5t9\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640418 kubelet[1916]: I0909 00:37:19.640216 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hostproc\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.640985 kubelet[1916]: I0909 00:37:19.640235 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-config-path\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.641046 kubelet[1916]: I0909 00:37:19.641000 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-run\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.641046 kubelet[1916]: I0909 00:37:19.641021 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-cilium-config-path\") pod \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\" (UID: \"5a436b0f-fc12-4c27-afb8-8f0f31fedab1\") "
Sep 9 00:37:19.641046 kubelet[1916]: I0909 00:37:19.641039 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-clustermesh-secrets\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.641124 kubelet[1916]: I0909 00:37:19.641055 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-xtables-lock\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.641862 kubelet[1916]: I0909 00:37:19.641838 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.641925 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.642002 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cni-path\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.642033 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-lib-modules\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.642051 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-bpf-maps\") pod \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\" (UID: \"aa63bef4-e7f6-4f50-aa14-8a7a52305c96\") "
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.642087 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.642656 kubelet[1916]: I0909 00:37:19.642098 1916 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.642858 kubelet[1916]: I0909 00:37:19.642127 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642858 kubelet[1916]: I0909 00:37:19.642146 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642858 kubelet[1916]: I0909 00:37:19.642612 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642858 kubelet[1916]: I0909 00:37:19.642671 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.642858 kubelet[1916]: I0909 00:37:19.642696 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.647435 kubelet[1916]: I0909 00:37:19.647401 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.647787 kubelet[1916]: I0909 00:37:19.647760 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a436b0f-fc12-4c27-afb8-8f0f31fedab1" (UID: "5a436b0f-fc12-4c27-afb8-8f0f31fedab1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 00:37:19.648239 kubelet[1916]: I0909 00:37:19.647456 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.648463 kubelet[1916]: I0909 00:37:19.647276 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 00:37:19.648877 kubelet[1916]: I0909 00:37:19.648848 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 00:37:19.649211 kubelet[1916]: I0909 00:37:19.649175 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 00:37:19.649720 kubelet[1916]: I0909 00:37:19.649683 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 9 00:37:19.650980 kubelet[1916]: I0909 00:37:19.650934 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9" (OuterVolumeSpecName: "kube-api-access-pp5t9") pod "aa63bef4-e7f6-4f50-aa14-8a7a52305c96" (UID: "aa63bef4-e7f6-4f50-aa14-8a7a52305c96"). InnerVolumeSpecName "kube-api-access-pp5t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 00:37:19.651234 kubelet[1916]: I0909 00:37:19.651207 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-kube-api-access-wwzq8" (OuterVolumeSpecName: "kube-api-access-wwzq8") pod "5a436b0f-fc12-4c27-afb8-8f0f31fedab1" (UID: "5a436b0f-fc12-4c27-afb8-8f0f31fedab1"). InnerVolumeSpecName "kube-api-access-wwzq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 9 00:37:19.742665 kubelet[1916]: I0909 00:37:19.742597 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.742860 kubelet[1916]: I0909 00:37:19.742842 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.742925 kubelet[1916]: I0909 00:37:19.742914 1916 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743013 kubelet[1916]: I0909 00:37:19.742999 1916 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743079 kubelet[1916]: I0909 00:37:19.743068 1916 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743151 kubelet[1916]: I0909 00:37:19.743139 1916 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743219 kubelet[1916]: I0909 00:37:19.743208 1916 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743280 kubelet[1916]: I0909 00:37:19.743269 1916 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743338 kubelet[1916]: I0909 00:37:19.743327 1916 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743397 kubelet[1916]: I0909 00:37:19.743387 1916 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743457 kubelet[1916]: I0909 00:37:19.743447 1916 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwzq8\" (UniqueName: \"kubernetes.io/projected/5a436b0f-fc12-4c27-afb8-8f0f31fedab1-kube-api-access-wwzq8\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743515 kubelet[1916]: I0909 00:37:19.743504 1916 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5t9\" (UniqueName: \"kubernetes.io/projected/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-kube-api-access-pp5t9\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743573 kubelet[1916]: I0909 00:37:19.743563 1916 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.743628 kubelet[1916]: I0909 00:37:19.743618 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa63bef4-e7f6-4f50-aa14-8a7a52305c96-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:37:19.882067 systemd[1]: Removed slice kubepods-besteffort-pod5a436b0f_fc12_4c27_afb8_8f0f31fedab1.slice.
Sep 9 00:37:20.316142 kubelet[1916]: E0909 00:37:20.316022 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:20.454257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f-rootfs.mount: Deactivated successfully. Sep 9 00:37:20.454357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810-rootfs.mount: Deactivated successfully. Sep 9 00:37:20.454417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af676e9b264b06a3c69031f3b1b158c126162c76b541badfde201a00eefe0810-shm.mount: Deactivated successfully. Sep 9 00:37:20.454484 systemd[1]: var-lib-kubelet-pods-aa63bef4\x2de7f6\x2d4f50\x2daa14\x2d8a7a52305c96-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:37:20.454538 systemd[1]: var-lib-kubelet-pods-aa63bef4\x2de7f6\x2d4f50\x2daa14\x2d8a7a52305c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpp5t9.mount: Deactivated successfully. Sep 9 00:37:20.454590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4887863f1277484558c21563b6b9378d714490d637d93594c02bb07f578dc631-rootfs.mount: Deactivated successfully. Sep 9 00:37:20.454655 systemd[1]: var-lib-kubelet-pods-5a436b0f\x2dfc12\x2d4c27\x2dafb8\x2d8f0f31fedab1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwzq8.mount: Deactivated successfully. Sep 9 00:37:20.454710 systemd[1]: var-lib-kubelet-pods-aa63bef4\x2de7f6\x2d4f50\x2daa14\x2d8a7a52305c96-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 9 00:37:20.581841 kubelet[1916]: I0909 00:37:20.581810 1916 scope.go:117] "RemoveContainer" containerID="2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f" Sep 9 00:37:20.584086 env[1216]: time="2025-09-09T00:37:20.584048697Z" level=info msg="RemoveContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\"" Sep 9 00:37:20.587729 systemd[1]: Removed slice kubepods-burstable-podaa63bef4_e7f6_4f50_aa14_8a7a52305c96.slice. Sep 9 00:37:20.587810 systemd[1]: kubepods-burstable-podaa63bef4_e7f6_4f50_aa14_8a7a52305c96.slice: Consumed 6.267s CPU time. Sep 9 00:37:20.588429 env[1216]: time="2025-09-09T00:37:20.588400720Z" level=info msg="RemoveContainer for \"2521f2b93ada08ee756db75f8ee4a17c87adeb3aa467dbc0484d445ceffc726f\" returns successfully" Sep 9 00:37:20.589044 kubelet[1916]: I0909 00:37:20.588651 1916 scope.go:117] "RemoveContainer" containerID="8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b" Sep 9 00:37:20.590963 env[1216]: time="2025-09-09T00:37:20.590921944Z" level=info msg="RemoveContainer for \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\"" Sep 9 00:37:20.595601 env[1216]: time="2025-09-09T00:37:20.595558282Z" level=info msg="RemoveContainer for \"8358e774308dd07525c3f094a9a5af8096b86b4dfed9be7c682c28a4c4fbba7b\" returns successfully" Sep 9 00:37:20.595790 kubelet[1916]: I0909 00:37:20.595743 1916 scope.go:117] "RemoveContainer" containerID="8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707" Sep 9 00:37:20.597528 env[1216]: time="2025-09-09T00:37:20.597484039Z" level=info msg="RemoveContainer for \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\"" Sep 9 00:37:20.602718 env[1216]: time="2025-09-09T00:37:20.602681964Z" level=info msg="RemoveContainer for \"8b86d8476abcc9d68e27242b7334e423707239b3d179be40ce7a4b87f6184707\" returns successfully" Sep 9 00:37:20.602962 kubelet[1916]: I0909 00:37:20.602911 1916 scope.go:117] "RemoveContainer" 
containerID="a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47" Sep 9 00:37:20.604493 env[1216]: time="2025-09-09T00:37:20.604467044Z" level=info msg="RemoveContainer for \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\"" Sep 9 00:37:20.606946 env[1216]: time="2025-09-09T00:37:20.606908750Z" level=info msg="RemoveContainer for \"a9766768ecd43e0a9eabc95ee1fec43d097b540a8c3829e9d20741468eaa9c47\" returns successfully" Sep 9 00:37:20.607162 kubelet[1916]: I0909 00:37:20.607086 1916 scope.go:117] "RemoveContainer" containerID="31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2" Sep 9 00:37:20.608161 env[1216]: time="2025-09-09T00:37:20.608128123Z" level=info msg="RemoveContainer for \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\"" Sep 9 00:37:20.610371 env[1216]: time="2025-09-09T00:37:20.610333994Z" level=info msg="RemoveContainer for \"31663b825668fbaff225441c1f5c1b3afab5196517ab1404e4d63e59f61ae7c2\" returns successfully" Sep 9 00:37:21.317253 kubelet[1916]: I0909 00:37:21.317207 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a436b0f-fc12-4c27-afb8-8f0f31fedab1" path="/var/lib/kubelet/pods/5a436b0f-fc12-4c27-afb8-8f0f31fedab1/volumes" Sep 9 00:37:21.317627 kubelet[1916]: I0909 00:37:21.317609 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" path="/var/lib/kubelet/pods/aa63bef4-e7f6-4f50-aa14-8a7a52305c96/volumes" Sep 9 00:37:21.398619 sshd[3533]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:21.401661 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:60390.service. Sep 9 00:37:21.402209 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:33852.service: Deactivated successfully. Sep 9 00:37:21.403120 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:37:21.403307 systemd[1]: session-22.scope: Consumed 1.286s CPU time. Sep 9 00:37:21.403774 systemd-logind[1206]: Session 22 logged out. 
Waiting for processes to exit. Sep 9 00:37:21.404913 systemd-logind[1206]: Removed session 22. Sep 9 00:37:21.436399 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 60390 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:37:21.437979 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:37:21.441196 systemd-logind[1206]: New session 23 of user core. Sep 9 00:37:21.442071 systemd[1]: Started session-23.scope. Sep 9 00:37:22.197570 sshd[3697]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:22.203604 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:60400.service. Sep 9 00:37:22.207121 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:60390.service: Deactivated successfully. Sep 9 00:37:22.207895 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:37:22.208587 systemd-logind[1206]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:37:22.211912 systemd-logind[1206]: Removed session 23. 
Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226099 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a436b0f-fc12-4c27-afb8-8f0f31fedab1" containerName="cilium-operator" Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226146 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="mount-bpf-fs" Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226152 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="cilium-agent" Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226158 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="mount-cgroup" Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226163 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="apply-sysctl-overwrites" Sep 9 00:37:22.228726 kubelet[1916]: E0909 00:37:22.226169 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="clean-cilium-state" Sep 9 00:37:22.228726 kubelet[1916]: I0909 00:37:22.226206 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a436b0f-fc12-4c27-afb8-8f0f31fedab1" containerName="cilium-operator" Sep 9 00:37:22.228726 kubelet[1916]: I0909 00:37:22.226217 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa63bef4-e7f6-4f50-aa14-8a7a52305c96" containerName="cilium-agent" Sep 9 00:37:22.237837 systemd[1]: Created slice kubepods-burstable-pod47f5d01d_0009_4ff3_baa7_52d16707b4f3.slice. 
Sep 9 00:37:22.251284 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 60400 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:37:22.252874 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:37:22.256545 systemd-logind[1206]: New session 24 of user core. Sep 9 00:37:22.257451 systemd[1]: Started session-24.scope. Sep 9 00:37:22.260056 kubelet[1916]: I0909 00:37:22.260018 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hostproc\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260240 kubelet[1916]: I0909 00:37:22.260169 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-etc-cni-netd\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260285 kubelet[1916]: I0909 00:37:22.260254 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cni-path\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260285 kubelet[1916]: I0909 00:37:22.260277 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-net\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260342 kubelet[1916]: I0909 00:37:22.260295 1916 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzjd\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-kube-api-access-fbzjd\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260342 kubelet[1916]: I0909 00:37:22.260313 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-bpf-maps\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260342 kubelet[1916]: I0909 00:37:22.260330 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-clustermesh-secrets\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260408 kubelet[1916]: I0909 00:37:22.260346 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-run\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260408 kubelet[1916]: I0909 00:37:22.260367 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-cgroup\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260408 kubelet[1916]: I0909 00:37:22.260383 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-xtables-lock\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260408 kubelet[1916]: I0909 00:37:22.260399 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-ipsec-secrets\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260840 kubelet[1916]: I0909 00:37:22.260414 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-kernel\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260840 kubelet[1916]: I0909 00:37:22.260513 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-config-path\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260840 kubelet[1916]: I0909 00:37:22.260540 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hubble-tls\") pod \"cilium-pnf7n\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.260840 kubelet[1916]: I0909 00:37:22.260684 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-lib-modules\") pod \"cilium-pnf7n\" (UID: 
\"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " pod="kube-system/cilium-pnf7n" Sep 9 00:37:22.397527 sshd[3709]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:22.402244 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:60402.service. Sep 9 00:37:22.403562 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:60400.service: Deactivated successfully. Sep 9 00:37:22.404301 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:37:22.409824 systemd-logind[1206]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:37:22.413738 systemd-logind[1206]: Removed session 24. Sep 9 00:37:22.414698 kubelet[1916]: E0909 00:37:22.414666 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:22.415583 env[1216]: time="2025-09-09T00:37:22.415189751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pnf7n,Uid:47f5d01d-0009-4ff3-baa7-52d16707b4f3,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:22.443751 env[1216]: time="2025-09-09T00:37:22.443483956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:22.443751 env[1216]: time="2025-09-09T00:37:22.443525716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:22.443751 env[1216]: time="2025-09-09T00:37:22.443543875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:22.443974 env[1216]: time="2025-09-09T00:37:22.443765911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6 pid=3737 runtime=io.containerd.runc.v2 Sep 9 00:37:22.450468 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 60402 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:37:22.454145 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:37:22.461340 systemd-logind[1206]: New session 25 of user core. Sep 9 00:37:22.463784 systemd[1]: Started cri-containerd-bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6.scope. Sep 9 00:37:22.464438 systemd[1]: Started session-25.scope. Sep 9 00:37:22.500074 env[1216]: time="2025-09-09T00:37:22.500023009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pnf7n,Uid:47f5d01d-0009-4ff3-baa7-52d16707b4f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\"" Sep 9 00:37:22.500690 kubelet[1916]: E0909 00:37:22.500666 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:22.502665 env[1216]: time="2025-09-09T00:37:22.502594642Z" level=info msg="CreateContainer within sandbox \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:37:22.513400 env[1216]: time="2025-09-09T00:37:22.513340606Z" level=info msg="CreateContainer within sandbox \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\"" Sep 9 
00:37:22.514981 env[1216]: time="2025-09-09T00:37:22.513931276Z" level=info msg="StartContainer for \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\"" Sep 9 00:37:22.532921 systemd[1]: Started cri-containerd-9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815.scope. Sep 9 00:37:22.549702 systemd[1]: cri-containerd-9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815.scope: Deactivated successfully. Sep 9 00:37:22.569609 env[1216]: time="2025-09-09T00:37:22.569550105Z" level=info msg="shim disconnected" id=9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815 Sep 9 00:37:22.569609 env[1216]: time="2025-09-09T00:37:22.569604384Z" level=warning msg="cleaning up after shim disconnected" id=9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815 namespace=k8s.io Sep 9 00:37:22.569609 env[1216]: time="2025-09-09T00:37:22.569615223Z" level=info msg="cleaning up dead shim" Sep 9 00:37:22.578765 env[1216]: time="2025-09-09T00:37:22.578709618Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\ntime=\"2025-09-09T00:37:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 9 00:37:22.579128 env[1216]: time="2025-09-09T00:37:22.579017412Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Sep 9 00:37:22.579318 env[1216]: time="2025-09-09T00:37:22.579276488Z" level=error msg="Failed to pipe stdout of container \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\"" error="reading from a closed fifo" Sep 9 00:37:22.579361 env[1216]: time="2025-09-09T00:37:22.579291168Z" level=error msg="Failed to pipe stderr of container 
\"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\"" error="reading from a closed fifo" Sep 9 00:37:22.581478 env[1216]: time="2025-09-09T00:37:22.581407889Z" level=error msg="StartContainer for \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 9 00:37:22.582350 kubelet[1916]: E0909 00:37:22.581748 1916 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815" Sep 9 00:37:22.582350 kubelet[1916]: E0909 00:37:22.582079 1916 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 9 00:37:22.582350 kubelet[1916]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 9 00:37:22.582350 kubelet[1916]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 9 00:37:22.582350 kubelet[1916]: rm /hostbin/cilium-mount Sep 9 00:37:22.582681 kubelet[1916]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbzjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pnf7n_kube-system(47f5d01d-0009-4ff3-baa7-52d16707b4f3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 9 00:37:22.582681 kubelet[1916]: > logger="UnhandledError" Sep 9 00:37:22.583476 kubelet[1916]: E0909 00:37:22.583399 1916 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pnf7n" podUID="47f5d01d-0009-4ff3-baa7-52d16707b4f3" Sep 9 00:37:22.589793 env[1216]: time="2025-09-09T00:37:22.589762497Z" level=info msg="StopPodSandbox for \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\"" Sep 9 00:37:22.590004 env[1216]: time="2025-09-09T00:37:22.589964093Z" level=info msg="Container to stop \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:37:22.605234 systemd[1]: cri-containerd-bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6.scope: Deactivated successfully. Sep 9 00:37:22.631187 env[1216]: time="2025-09-09T00:37:22.631117865Z" level=info msg="shim disconnected" id=bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6 Sep 9 00:37:22.631510 env[1216]: time="2025-09-09T00:37:22.631488179Z" level=warning msg="cleaning up after shim disconnected" id=bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6 namespace=k8s.io Sep 9 00:37:22.631614 env[1216]: time="2025-09-09T00:37:22.631586297Z" level=info msg="cleaning up dead shim" Sep 9 00:37:22.638831 env[1216]: time="2025-09-09T00:37:22.638786606Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\n" Sep 9 00:37:22.639331 env[1216]: time="2025-09-09T00:37:22.639265317Z" level=info msg="TearDown network for sandbox \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\" successfully" Sep 9 00:37:22.639435 env[1216]: time="2025-09-09T00:37:22.639415434Z" level=info 
msg="StopPodSandbox for \"bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6\" returns successfully" Sep 9 00:37:22.669051 kubelet[1916]: I0909 00:37:22.669010 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-ipsec-secrets\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669051 kubelet[1916]: I0909 00:37:22.669047 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-kernel\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669063 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hostproc\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669079 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cni-path\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669095 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-lib-modules\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669114 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fbzjd\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-kube-api-access-fbzjd\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669132 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-config-path\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669279 kubelet[1916]: I0909 00:37:22.669149 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-bpf-maps\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: I0909 00:37:22.669162 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-etc-cni-netd\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: I0909 00:37:22.669179 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-clustermesh-secrets\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: I0909 00:37:22.669192 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-run\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: 
I0909 00:37:22.669210 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-cgroup\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: I0909 00:37:22.669224 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-xtables-lock\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669425 kubelet[1916]: I0909 00:37:22.669240 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hubble-tls\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669552 kubelet[1916]: I0909 00:37:22.669255 1916 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-net\") pod \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\" (UID: \"47f5d01d-0009-4ff3-baa7-52d16707b4f3\") " Sep 9 00:37:22.669552 kubelet[1916]: I0909 00:37:22.669339 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.669751 kubelet[1916]: I0909 00:37:22.669727 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.669891 kubelet[1916]: I0909 00:37:22.669862 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672461 kubelet[1916]: I0909 00:37:22.669986 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672616 kubelet[1916]: I0909 00:37:22.670004 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672739 kubelet[1916]: I0909 00:37:22.670014 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672918 kubelet[1916]: I0909 00:37:22.672886 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:37:22.672918 kubelet[1916]: I0909 00:37:22.670021 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672918 kubelet[1916]: I0909 00:37:22.670031 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672918 kubelet[1916]: I0909 00:37:22.670034 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.672918 kubelet[1916]: I0909 00:37:22.670966 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:37:22.673145 kubelet[1916]: I0909 00:37:22.672389 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:37:22.673649 kubelet[1916]: I0909 00:37:22.673614 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-kube-api-access-fbzjd" (OuterVolumeSpecName: "kube-api-access-fbzjd") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "kube-api-access-fbzjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:37:22.673717 kubelet[1916]: I0909 00:37:22.673656 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:37:22.674717 kubelet[1916]: I0909 00:37:22.674689 1916 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47f5d01d-0009-4ff3-baa7-52d16707b4f3" (UID: "47f5d01d-0009-4ff3-baa7-52d16707b4f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769700 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769730 1916 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769742 1916 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769750 1916 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cni-path\") on node \"localhost\" DevicePath 
\"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769758 1916 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769766 1916 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbzjd\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-kube-api-access-fbzjd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769775 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.769812 kubelet[1916]: I0909 00:37:22.769782 1916 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770172 kubelet[1916]: I0909 00:37:22.769792 1916 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770535 kubelet[1916]: I0909 00:37:22.770514 1916 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47f5d01d-0009-4ff3-baa7-52d16707b4f3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770574 kubelet[1916]: I0909 00:37:22.770544 1916 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770574 kubelet[1916]: I0909 00:37:22.770555 1916 reconciler_common.go:293] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770574 kubelet[1916]: I0909 00:37:22.770564 1916 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770574 kubelet[1916]: I0909 00:37:22.770574 1916 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47f5d01d-0009-4ff3-baa7-52d16707b4f3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:22.770706 kubelet[1916]: I0909 00:37:22.770583 1916 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47f5d01d-0009-4ff3-baa7-52d16707b4f3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:37:23.316160 kubelet[1916]: E0909 00:37:23.316057 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:23.322401 systemd[1]: Removed slice kubepods-burstable-pod47f5d01d_0009_4ff3_baa7_52d16707b4f3.slice. Sep 9 00:37:23.366279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc8cfb7d3c4e64e8bd2d04f0138f5985ba74b1d184e99efa4136fa0213dab3e6-shm.mount: Deactivated successfully. Sep 9 00:37:23.366380 systemd[1]: var-lib-kubelet-pods-47f5d01d\x2d0009\x2d4ff3\x2dbaa7\x2d52d16707b4f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:37:23.366436 systemd[1]: var-lib-kubelet-pods-47f5d01d\x2d0009\x2d4ff3\x2dbaa7\x2d52d16707b4f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbzjd.mount: Deactivated successfully. 
Sep 9 00:37:23.366496 systemd[1]: var-lib-kubelet-pods-47f5d01d\x2d0009\x2d4ff3\x2dbaa7\x2d52d16707b4f3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:37:23.366548 systemd[1]: var-lib-kubelet-pods-47f5d01d\x2d0009\x2d4ff3\x2dbaa7\x2d52d16707b4f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:37:23.592375 kubelet[1916]: I0909 00:37:23.592346 1916 scope.go:117] "RemoveContainer" containerID="9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815" Sep 9 00:37:23.593992 env[1216]: time="2025-09-09T00:37:23.593661926Z" level=info msg="RemoveContainer for \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\"" Sep 9 00:37:23.612131 env[1216]: time="2025-09-09T00:37:23.612012227Z" level=info msg="RemoveContainer for \"9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815\" returns successfully" Sep 9 00:37:23.650682 kubelet[1916]: E0909 00:37:23.646872 1916 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47f5d01d-0009-4ff3-baa7-52d16707b4f3" containerName="mount-cgroup" Sep 9 00:37:23.650682 kubelet[1916]: I0909 00:37:23.646927 1916 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f5d01d-0009-4ff3-baa7-52d16707b4f3" containerName="mount-cgroup" Sep 9 00:37:23.656663 systemd[1]: Created slice kubepods-burstable-podd67448ef_0ab5_47c3_b999_9290a7205baf.slice. 
Sep 9 00:37:23.675499 kubelet[1916]: I0909 00:37:23.675446 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-hostproc\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675499 kubelet[1916]: I0909 00:37:23.675495 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-etc-cni-netd\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675693 kubelet[1916]: I0909 00:37:23.675515 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d67448ef-0ab5-47c3-b999-9290a7205baf-cilium-config-path\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675693 kubelet[1916]: I0909 00:37:23.675548 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d67448ef-0ab5-47c3-b999-9290a7205baf-cilium-ipsec-secrets\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675693 kubelet[1916]: I0909 00:37:23.675567 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-host-proc-sys-net\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675693 kubelet[1916]: I0909 00:37:23.675583 1916 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d67448ef-0ab5-47c3-b999-9290a7205baf-hubble-tls\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675693 kubelet[1916]: I0909 00:37:23.675599 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d67448ef-0ab5-47c3-b999-9290a7205baf-clustermesh-secrets\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675624 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-xtables-lock\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675648 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-lib-modules\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675665 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-bpf-maps\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675683 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-cilium-cgroup\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675708 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8g6b\" (UniqueName: \"kubernetes.io/projected/d67448ef-0ab5-47c3-b999-9290a7205baf-kube-api-access-c8g6b\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675813 kubelet[1916]: I0909 00:37:23.675726 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-cilium-run\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675973 kubelet[1916]: I0909 00:37:23.675753 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-cni-path\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.675973 kubelet[1916]: I0909 00:37:23.675777 1916 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d67448ef-0ab5-47c3-b999-9290a7205baf-host-proc-sys-kernel\") pod \"cilium-rddrn\" (UID: \"d67448ef-0ab5-47c3-b999-9290a7205baf\") " pod="kube-system/cilium-rddrn" Sep 9 00:37:23.959153 kubelet[1916]: E0909 00:37:23.959038 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:23.960226 env[1216]: time="2025-09-09T00:37:23.959878320Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rddrn,Uid:d67448ef-0ab5-47c3-b999-9290a7205baf,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:23.973105 env[1216]: time="2025-09-09T00:37:23.972064522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:23.973105 env[1216]: time="2025-09-09T00:37:23.972107561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:23.973105 env[1216]: time="2025-09-09T00:37:23.972117721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:23.973105 env[1216]: time="2025-09-09T00:37:23.972267439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a pid=3862 runtime=io.containerd.runc.v2 Sep 9 00:37:23.989924 systemd[1]: Started cri-containerd-0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a.scope. 
Sep 9 00:37:24.022083 env[1216]: time="2025-09-09T00:37:24.022035425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rddrn,Uid:d67448ef-0ab5-47c3-b999-9290a7205baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\"" Sep 9 00:37:24.023094 kubelet[1916]: E0909 00:37:24.022708 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:24.024875 env[1216]: time="2025-09-09T00:37:24.024439151Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:37:24.034361 env[1216]: time="2025-09-09T00:37:24.034305928Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0\"" Sep 9 00:37:24.036236 env[1216]: time="2025-09-09T00:37:24.034988238Z" level=info msg="StartContainer for \"abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0\"" Sep 9 00:37:24.048833 systemd[1]: Started cri-containerd-abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0.scope. Sep 9 00:37:24.086808 systemd[1]: cri-containerd-abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0.scope: Deactivated successfully. 
Sep 9 00:37:24.092349 env[1216]: time="2025-09-09T00:37:24.092248170Z" level=info msg="StartContainer for \"abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0\" returns successfully" Sep 9 00:37:24.118713 env[1216]: time="2025-09-09T00:37:24.118654789Z" level=info msg="shim disconnected" id=abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0 Sep 9 00:37:24.118713 env[1216]: time="2025-09-09T00:37:24.118710468Z" level=warning msg="cleaning up after shim disconnected" id=abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0 namespace=k8s.io Sep 9 00:37:24.118713 env[1216]: time="2025-09-09T00:37:24.118719668Z" level=info msg="cleaning up dead shim" Sep 9 00:37:24.126128 env[1216]: time="2025-09-09T00:37:24.126078561Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3945 runtime=io.containerd.runc.v2\n" Sep 9 00:37:24.374267 kubelet[1916]: E0909 00:37:24.374227 1916 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:37:24.596182 kubelet[1916]: E0909 00:37:24.596152 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:24.599890 env[1216]: time="2025-09-09T00:37:24.599833793Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:37:24.611711 env[1216]: time="2025-09-09T00:37:24.609858848Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96\"" Sep 9 00:37:24.612037 env[1216]: time="2025-09-09T00:37:24.611862299Z" level=info msg="StartContainer for \"b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96\"" Sep 9 00:37:24.643575 systemd[1]: Started cri-containerd-b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96.scope. Sep 9 00:37:24.670730 env[1216]: time="2025-09-09T00:37:24.670590370Z" level=info msg="StartContainer for \"b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96\" returns successfully" Sep 9 00:37:24.678172 systemd[1]: cri-containerd-b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96.scope: Deactivated successfully. Sep 9 00:37:24.704738 env[1216]: time="2025-09-09T00:37:24.704688277Z" level=info msg="shim disconnected" id=b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96 Sep 9 00:37:24.704993 env[1216]: time="2025-09-09T00:37:24.704973553Z" level=warning msg="cleaning up after shim disconnected" id=b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96 namespace=k8s.io Sep 9 00:37:24.705057 env[1216]: time="2025-09-09T00:37:24.705041792Z" level=info msg="cleaning up dead shim" Sep 9 00:37:24.712514 env[1216]: time="2025-09-09T00:37:24.712471485Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" Sep 9 00:37:25.317882 kubelet[1916]: I0909 00:37:25.317836 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f5d01d-0009-4ff3-baa7-52d16707b4f3" path="/var/lib/kubelet/pods/47f5d01d-0009-4ff3-baa7-52d16707b4f3/volumes" Sep 9 00:37:25.366472 systemd[1]: run-containerd-runc-k8s.io-b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96-runc.2963YW.mount: Deactivated successfully. 
Sep 9 00:37:25.366574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96-rootfs.mount: Deactivated successfully. Sep 9 00:37:25.598727 kubelet[1916]: E0909 00:37:25.598699 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:25.600710 env[1216]: time="2025-09-09T00:37:25.600665588Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:37:25.611548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654211847.mount: Deactivated successfully. Sep 9 00:37:25.620513 env[1216]: time="2025-09-09T00:37:25.620447897Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61\"" Sep 9 00:37:25.621278 env[1216]: time="2025-09-09T00:37:25.621247367Z" level=info msg="StartContainer for \"e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61\"" Sep 9 00:37:25.636959 systemd[1]: Started cri-containerd-e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61.scope. Sep 9 00:37:25.674976 systemd[1]: cri-containerd-e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61.scope: Deactivated successfully. 
Sep 9 00:37:25.680220 kubelet[1916]: W0909 00:37:25.678856 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47f5d01d_0009_4ff3_baa7_52d16707b4f3.slice/cri-containerd-9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815.scope WatchSource:0}: container "9d4bed21e591152c0a68f08b8891dc6dcbef569b15227b3e271cce9851479815" in namespace "k8s.io": not found
Sep 9 00:37:25.681771 env[1216]: time="2025-09-09T00:37:25.681727040Z" level=info msg="StartContainer for \"e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61\" returns successfully"
Sep 9 00:37:25.706493 env[1216]: time="2025-09-09T00:37:25.706426807Z" level=info msg="shim disconnected" id=e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61
Sep 9 00:37:25.706493 env[1216]: time="2025-09-09T00:37:25.706476886Z" level=warning msg="cleaning up after shim disconnected" id=e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61 namespace=k8s.io
Sep 9 00:37:25.706493 env[1216]: time="2025-09-09T00:37:25.706487206Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:25.713129 env[1216]: time="2025-09-09T00:37:25.713065483Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4064 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:26.608312 kubelet[1916]: E0909 00:37:26.607769 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:26.616908 env[1216]: time="2025-09-09T00:37:26.614085110Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:37:26.647011 env[1216]: time="2025-09-09T00:37:26.646931590Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1\""
Sep 9 00:37:26.647727 env[1216]: time="2025-09-09T00:37:26.647683942Z" level=info msg="StartContainer for \"a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1\""
Sep 9 00:37:26.676070 systemd[1]: Started cri-containerd-a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1.scope.
Sep 9 00:37:26.711827 systemd[1]: cri-containerd-a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1.scope: Deactivated successfully.
Sep 9 00:37:26.714556 env[1216]: time="2025-09-09T00:37:26.714448170Z" level=info msg="StartContainer for \"a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1\" returns successfully"
Sep 9 00:37:26.734047 env[1216]: time="2025-09-09T00:37:26.733989756Z" level=info msg="shim disconnected" id=a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1
Sep 9 00:37:26.734047 env[1216]: time="2025-09-09T00:37:26.734045435Z" level=warning msg="cleaning up after shim disconnected" id=a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1 namespace=k8s.io
Sep 9 00:37:26.734271 env[1216]: time="2025-09-09T00:37:26.734055195Z" level=info msg="cleaning up dead shim"
Sep 9 00:37:26.740731 env[1216]: time="2025-09-09T00:37:26.740680643Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4119 runtime=io.containerd.runc.v2\n"
Sep 9 00:37:27.368149 systemd[1]: run-containerd-runc-k8s.io-a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1-runc.nI9nIV.mount: Deactivated successfully.
Sep 9 00:37:27.368243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1-rootfs.mount: Deactivated successfully.
Sep 9 00:37:27.611762 kubelet[1916]: E0909 00:37:27.611676 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:27.615117 env[1216]: time="2025-09-09T00:37:27.614349167Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:37:27.682121 env[1216]: time="2025-09-09T00:37:27.682002978Z" level=info msg="CreateContainer within sandbox \"0393569f275b172e2dc669bf98503a52a95d181aac677f1eafffddabcfabea3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df\""
Sep 9 00:37:27.684912 env[1216]: time="2025-09-09T00:37:27.684882711Z" level=info msg="StartContainer for \"f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df\""
Sep 9 00:37:27.704946 systemd[1]: Started cri-containerd-f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df.scope.
Sep 9 00:37:27.762225 env[1216]: time="2025-09-09T00:37:27.762164833Z" level=info msg="StartContainer for \"f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df\" returns successfully"
Sep 9 00:37:28.041964 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 9 00:37:28.627856 kubelet[1916]: E0909 00:37:28.627828 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:28.795604 kubelet[1916]: W0909 00:37:28.795552 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67448ef_0ab5_47c3_b999_9290a7205baf.slice/cri-containerd-abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0.scope WatchSource:0}: task abb0be4d06e2737ac37d9e16bfa1a9d79c602727ef8ba98c943b20cd7d21a3a0 not found: not found
Sep 9 00:37:29.961135 kubelet[1916]: E0909 00:37:29.961098 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:30.989206 systemd-networkd[1046]: lxc_health: Link UP
Sep 9 00:37:30.997442 systemd-networkd[1046]: lxc_health: Gained carrier
Sep 9 00:37:30.997799 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 9 00:37:31.905284 kubelet[1916]: W0909 00:37:31.905241 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67448ef_0ab5_47c3_b999_9290a7205baf.slice/cri-containerd-b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96.scope WatchSource:0}: task b5fa615ba0b55a40746550f54cbefd090fcea4d249c034335f65640b938efe96 not found: not found
Sep 9 00:37:31.961136 kubelet[1916]: E0909 00:37:31.961101 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:31.988436 kubelet[1916]: I0909 00:37:31.988363 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rddrn" podStartSLOduration=8.988346747 podStartE2EDuration="8.988346747s" podCreationTimestamp="2025-09-09 00:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:28.653335801 +0000 UTC m=+89.427913357" watchObservedRunningTime="2025-09-09 00:37:31.988346747 +0000 UTC m=+92.762924303"
Sep 9 00:37:32.279782 systemd-networkd[1046]: lxc_health: Gained IPv6LL
Sep 9 00:37:32.634499 kubelet[1916]: E0909 00:37:32.634469 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:33.017897 systemd[1]: run-containerd-runc-k8s.io-f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df-runc.mViO5J.mount: Deactivated successfully.
Sep 9 00:37:33.316425 kubelet[1916]: E0909 00:37:33.316327 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:33.636607 kubelet[1916]: E0909 00:37:33.636575 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:35.016493 kubelet[1916]: W0909 00:37:35.016440 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67448ef_0ab5_47c3_b999_9290a7205baf.slice/cri-containerd-e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61.scope WatchSource:0}: task e28e3a9b2509c241d5da6a0fb4dbb3cc1ef4e49c2041dc8acb8ee9183415cb61 not found: not found
Sep 9 00:37:37.330614 systemd[1]: run-containerd-runc-k8s.io-f357cf7760ee76b16933d2f35d4a4cf98f20654ebb4e0513e20f1277cb8732df-runc.5tL2pd.mount: Deactivated successfully.
Sep 9 00:37:37.390969 kubelet[1916]: E0909 00:37:37.390931 1916 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39812->127.0.0.1:40443: write tcp 127.0.0.1:39812->127.0.0.1:40443: write: broken pipe
Sep 9 00:37:37.393180 sshd[3726]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:37.396119 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:60402.service: Deactivated successfully.
Sep 9 00:37:37.396954 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:37:37.397562 systemd-logind[1206]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:37:37.398477 systemd-logind[1206]: Removed session 25.
Sep 9 00:37:38.123573 kubelet[1916]: W0909 00:37:38.123499 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67448ef_0ab5_47c3_b999_9290a7205baf.slice/cri-containerd-a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1.scope WatchSource:0}: task a515ae5e0fe386486d12aeba8daa1d8b2fd50b5051d5576034cceb29df8a04d1 not found: not found