Sep 6 00:13:17.694950 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 00:13:17.694970 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 00:13:17.694978 kernel: efi: EFI v2.70 by EDK II
Sep 6 00:13:17.694984 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 6 00:13:17.694989 kernel: random: crng init done
Sep 6 00:13:17.694994 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:13:17.695001 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 6 00:13:17.695007 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 00:13:17.695013 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695018 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695024 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695030 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695035 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695040 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695048 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695054 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695060 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:13:17.695066 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 6 00:13:17.695084 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:13:17.695090 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:13:17.695096 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Sep 6 00:13:17.695102 kernel: Zone ranges:
Sep 6 00:13:17.695109 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:13:17.695116 kernel: DMA32 empty
Sep 6 00:13:17.695121 kernel: Normal empty
Sep 6 00:13:17.695127 kernel: Movable zone start for each node
Sep 6 00:13:17.695133 kernel: Early memory node ranges
Sep 6 00:13:17.695139 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 6 00:13:17.695145 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 6 00:13:17.695150 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 6 00:13:17.695156 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 6 00:13:17.695162 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 6 00:13:17.695167 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 6 00:13:17.695173 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 6 00:13:17.695179 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:13:17.695186 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 6 00:13:17.695192 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:13:17.695197 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 00:13:17.695203 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:13:17.695209 kernel: psci: Trusted OS migration not required
Sep 6 00:13:17.695217 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:13:17.695223 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 6 00:13:17.695231 kernel: ACPI: SRAT not present
Sep 6 00:13:17.695237 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 00:13:17.695244 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 00:13:17.695250 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 6 00:13:17.695256 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:13:17.695262 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:13:17.695268 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 00:13:17.695274 kernel: CPU features: detected: Spectre-v4
Sep 6 00:13:17.695280 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:13:17.695288 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:13:17.695294 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:13:17.695300 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 00:13:17.695306 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 00:13:17.695312 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 6 00:13:17.695318 kernel: Policy zone: DMA
Sep 6 00:13:17.695325 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 00:13:17.695332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:13:17.695338 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:13:17.695344 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:13:17.695350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:13:17.695358 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114952K reserved, 0K cma-reserved)
Sep 6 00:13:17.695365 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:13:17.695371 kernel: trace event string verifier disabled
Sep 6 00:13:17.695377 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:13:17.695384 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:13:17.695390 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:13:17.695397 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:13:17.695404 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:13:17.695410 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:13:17.695416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:13:17.695422 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:13:17.695429 kernel: GICv3: 256 SPIs implemented
Sep 6 00:13:17.695435 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:13:17.695441 kernel: GICv3: Distributor has no Range Selector support
Sep 6 00:13:17.695447 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:13:17.695454 kernel: GICv3: 16 PPIs implemented
Sep 6 00:13:17.695460 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 6 00:13:17.695466 kernel: ACPI: SRAT not present
Sep 6 00:13:17.695472 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 6 00:13:17.695479 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:13:17.695485 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:13:17.695492 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 6 00:13:17.695498 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 6 00:13:17.695506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:13:17.695512 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 00:13:17.695518 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 00:13:17.695525 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 00:13:17.695531 kernel: arm-pv: using stolen time PV
Sep 6 00:13:17.695537 kernel: Console: colour dummy device 80x25
Sep 6 00:13:17.695543 kernel: ACPI: Core revision 20210730
Sep 6 00:13:17.695550 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 00:13:17.695556 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:13:17.695563 kernel: LSM: Security Framework initializing
Sep 6 00:13:17.695570 kernel: SELinux: Initializing.
Sep 6 00:13:17.695577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:13:17.695583 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:13:17.695589 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:13:17.695595 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 6 00:13:17.695601 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 6 00:13:17.695608 kernel: Remapping and enabling EFI services.
Sep 6 00:13:17.695614 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:13:17.695621 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:13:17.695628 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 6 00:13:17.695635 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 6 00:13:17.695641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:13:17.695647 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 00:13:17.695654 kernel: Detected PIPT I-cache on CPU2
Sep 6 00:13:17.695660 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 6 00:13:17.695667 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 6 00:13:17.695673 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:13:17.695679 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 6 00:13:17.695686 kernel: Detected PIPT I-cache on CPU3
Sep 6 00:13:17.695693 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 6 00:13:17.695700 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 6 00:13:17.695706 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:13:17.695713 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 6 00:13:17.695723 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:13:17.695731 kernel: SMP: Total of 4 processors activated.
Sep 6 00:13:17.695738 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:13:17.695745 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 00:13:17.695759 kernel: CPU features: detected: Common not Private translations
Sep 6 00:13:17.695766 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:13:17.695773 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 00:13:17.695779 kernel: CPU features: detected: LSE atomic instructions
Sep 6 00:13:17.695790 kernel: CPU features: detected: Privileged Access Never
Sep 6 00:13:17.695798 kernel: CPU features: detected: RAS Extension Support
Sep 6 00:13:17.695804 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 6 00:13:17.695811 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:13:17.695818 kernel: alternatives: patching kernel code
Sep 6 00:13:17.695826 kernel: devtmpfs: initialized
Sep 6 00:13:17.695833 kernel: KASLR enabled
Sep 6 00:13:17.695840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:13:17.695846 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:13:17.695853 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:13:17.695860 kernel: SMBIOS 3.0.0 present.
Sep 6 00:13:17.695866 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 6 00:13:17.695873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:13:17.695880 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:13:17.695888 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:13:17.695895 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:13:17.695901 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:13:17.695908 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Sep 6 00:13:17.695915 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:13:17.695921 kernel: cpuidle: using governor menu
Sep 6 00:13:17.695928 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:13:17.695938 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:13:17.695944 kernel: ACPI: bus type PCI registered
Sep 6 00:13:17.695952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:13:17.695959 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:13:17.695966 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:13:17.695972 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:13:17.695979 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:13:17.695986 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:13:17.695993 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 00:13:17.696000 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:13:17.696006 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:13:17.696014 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:13:17.696021 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:13:17.696027 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 00:13:17.696034 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 00:13:17.696040 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 00:13:17.696047 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:13:17.696054 kernel: ACPI: Interpreter enabled
Sep 6 00:13:17.696061 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:13:17.696067 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:13:17.696082 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 00:13:17.696089 kernel: printk: console [ttyAMA0] enabled
Sep 6 00:13:17.696096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:13:17.696220 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:13:17.696285 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:13:17.696346 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:13:17.696405 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 6 00:13:17.696474 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 6 00:13:17.696484 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 6 00:13:17.696491 kernel: PCI host bridge to bus 0000:00
Sep 6 00:13:17.696563 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 6 00:13:17.696619 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:13:17.696679 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 6 00:13:17.696733 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:13:17.696815 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 6 00:13:17.696890 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:13:17.696953 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 6 00:13:17.697014 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 6 00:13:17.697085 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:13:17.697146 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:13:17.698193 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 6 00:13:17.698277 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 6 00:13:17.698339 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 6 00:13:17.698394 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:13:17.698458 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 6 00:13:17.698469 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:13:17.698476 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:13:17.698484 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:13:17.698490 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:13:17.698500 kernel: iommu: Default domain type: Translated
Sep 6 00:13:17.698507 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:13:17.698513 kernel: vgaarb: loaded
Sep 6 00:13:17.698520 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 00:13:17.698527 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 00:13:17.698534 kernel: PTP clock support registered
Sep 6 00:13:17.698540 kernel: Registered efivars operations
Sep 6 00:13:17.698547 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:13:17.698554 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:13:17.698562 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:13:17.698569 kernel: pnp: PnP ACPI init
Sep 6 00:13:17.698643 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 6 00:13:17.698653 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:13:17.698660 kernel: NET: Registered PF_INET protocol family
Sep 6 00:13:17.698667 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:13:17.698674 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:13:17.698681 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:13:17.698689 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:13:17.698696 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 00:13:17.698703 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:13:17.698710 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:13:17.698717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:13:17.698724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:13:17.698731 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:13:17.698737 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 6 00:13:17.698744 kernel: kvm [1]: HYP mode not available
Sep 6 00:13:17.698759 kernel: Initialise system trusted keyrings
Sep 6 00:13:17.698767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:13:17.698774 kernel: Key type asymmetric registered
Sep 6 00:13:17.698781 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:13:17.698872 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 00:13:17.698882 kernel: io scheduler mq-deadline registered
Sep 6 00:13:17.698889 kernel: io scheduler kyber registered
Sep 6 00:13:17.698896 kernel: io scheduler bfq registered
Sep 6 00:13:17.698903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:13:17.698913 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:13:17.698921 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 6 00:13:17.701746 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 6 00:13:17.702020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:13:17.702028 kernel: thunder_xcv, ver 1.0
Sep 6 00:13:17.702034 kernel: thunder_bgx, ver 1.0
Sep 6 00:13:17.702044 kernel: nicpf, ver 1.0
Sep 6 00:13:17.702051 kernel: nicvf, ver 1.0
Sep 6 00:13:17.702179 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:13:17.702247 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:13:17 UTC (1757117597)
Sep 6 00:13:17.702259 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:13:17.702266 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:13:17.702273 kernel: Segment Routing with IPv6
Sep 6 00:13:17.702280 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:13:17.702287 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:13:17.702293 kernel: Key type dns_resolver registered
Sep 6 00:13:17.702301 kernel: registered taskstats version 1
Sep 6 00:13:17.702309 kernel: Loading compiled-in X.509 certificates
Sep 6 00:13:17.702316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 00:13:17.702323 kernel: Key type .fscrypt registered
Sep 6 00:13:17.702332 kernel: Key type fscrypt-provisioning registered
Sep 6 00:13:17.702338 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:13:17.702408 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:13:17.702421 kernel: ima: No architecture policies found
Sep 6 00:13:17.702429 kernel: clk: Disabling unused clocks
Sep 6 00:13:17.702436 kernel: Freeing unused kernel memory: 36416K
Sep 6 00:13:17.702448 kernel: Run /init as init process
Sep 6 00:13:17.702454 kernel: with arguments:
Sep 6 00:13:17.702461 kernel: /init
Sep 6 00:13:17.702468 kernel: with environment:
Sep 6 00:13:17.702475 kernel: HOME=/
Sep 6 00:13:17.702528 kernel: TERM=linux
Sep 6 00:13:17.702575 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:13:17.702586 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:13:17.702629 systemd[1]: Detected virtualization kvm.
Sep 6 00:13:17.702637 systemd[1]: Detected architecture arm64.
Sep 6 00:13:17.702649 systemd[1]: Running in initrd.
Sep 6 00:13:17.702658 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:13:17.702666 systemd[1]: Hostname set to .
Sep 6 00:13:17.702673 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:13:17.702680 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:13:17.702723 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:13:17.702734 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:13:17.702741 systemd[1]: Reached target paths.target.
Sep 6 00:13:17.702804 systemd[1]: Reached target slices.target.
Sep 6 00:13:17.702811 systemd[1]: Reached target swap.target.
Sep 6 00:13:17.702818 systemd[1]: Reached target timers.target.
Sep 6 00:13:17.702828 systemd[1]: Listening on iscsid.socket.
Sep 6 00:13:17.702836 systemd[1]: Listening on iscsiuio.socket.
Sep 6 00:13:17.702848 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:13:17.702894 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:13:17.702905 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:13:17.702912 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:13:17.702920 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:13:17.702927 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:13:17.702934 systemd[1]: Reached target sockets.target. Sep 6 00:13:17.702941 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:13:17.702987 systemd[1]: Finished network-cleanup.service. Sep 6 00:13:17.702999 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:13:17.703007 systemd[1]: Starting systemd-journald.service... Sep 6 00:13:17.703014 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:13:17.703055 systemd[1]: Starting systemd-resolved.service... Sep 6 00:13:17.703118 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:13:17.703127 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:13:17.703135 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:13:17.703142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:13:17.703150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:13:17.703162 kernel: audit: type=1130 audit(1757117597.698:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.703173 systemd-journald[290]: Journal started Sep 6 00:13:17.703228 systemd-journald[290]: Runtime Journal (/run/log/journal/e746b0c63c384f35a1920dab758fd363) is 6.0M, max 48.7M, 42.6M free. Sep 6 00:13:17.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:13:17.686894 systemd-modules-load[291]: Inserted module 'overlay' Sep 6 00:13:17.704700 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:13:17.707831 kernel: audit: type=1130 audit(1757117597.705:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.707939 systemd[1]: Started systemd-journald.service. Sep 6 00:13:17.707952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:13:17.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.710285 kernel: Bridge firewalling registered Sep 6 00:13:17.710762 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 6 00:13:17.713485 kernel: audit: type=1130 audit(1757117597.710:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.711699 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:13:17.712886 systemd-resolved[292]: Positive Trust Anchors: Sep 6 00:13:17.712894 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:13:17.712922 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:13:17.717130 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 6 00:13:17.733362 kernel: SCSI subsystem initialized Sep 6 00:13:17.733390 kernel: audit: type=1130 audit(1757117597.727:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.717954 systemd[1]: Started systemd-resolved.service. Sep 6 00:13:17.728130 systemd[1]: Reached target nss-lookup.target. Sep 6 00:13:17.737432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:13:17.737452 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:13:17.737468 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:13:17.739627 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 6 00:13:17.740405 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:13:17.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:13:17.742459 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:13:17.748233 kernel: audit: type=1130 audit(1757117597.741:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.749726 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:13:17.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.751348 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:13:17.758195 kernel: audit: type=1130 audit(1757117597.750:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.758559 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:13:17.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.762092 kernel: audit: type=1130 audit(1757117597.759:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:13:17.763418 dracut-cmdline[311]: dracut-dracut-053 Sep 6 00:13:17.765698 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:13:17.825115 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:13:17.837095 kernel: iscsi: registered transport (tcp) Sep 6 00:13:17.852097 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:13:17.852129 kernel: QLogic iSCSI HBA Driver Sep 6 00:13:17.886328 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:13:17.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:13:17.887770 systemd[1]: Starting dracut-pre-udev.service... Sep 6 00:13:17.890580 kernel: audit: type=1130 audit(1757117597.886:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:13:17.930108 kernel: raid6: neonx8 gen() 13724 MB/s Sep 6 00:13:17.947093 kernel: raid6: neonx8 xor() 10778 MB/s Sep 6 00:13:17.964089 kernel: raid6: neonx4 gen() 13482 MB/s Sep 6 00:13:17.981087 kernel: raid6: neonx4 xor() 11066 MB/s Sep 6 00:13:17.998089 kernel: raid6: neonx2 gen() 12903 MB/s Sep 6 00:13:18.015191 kernel: raid6: neonx2 xor() 10304 MB/s Sep 6 00:13:18.032090 kernel: raid6: neonx1 gen() 10612 MB/s Sep 6 00:13:18.049142 kernel: raid6: neonx1 xor() 8770 MB/s Sep 6 00:13:18.066104 kernel: raid6: int64x8 gen() 6266 MB/s Sep 6 00:13:18.083141 kernel: raid6: int64x8 xor() 3541 MB/s Sep 6 00:13:18.100110 kernel: raid6: int64x4 gen() 7190 MB/s Sep 6 00:13:18.117115 kernel: raid6: int64x4 xor() 3824 MB/s Sep 6 00:13:18.134117 kernel: raid6: int64x2 gen() 6106 MB/s Sep 6 00:13:18.151118 kernel: raid6: int64x2 xor() 3316 MB/s Sep 6 00:13:18.168119 kernel: raid6: int64x1 gen() 5040 MB/s Sep 6 00:13:18.185409 kernel: raid6: int64x1 xor() 2644 MB/s Sep 6 00:13:18.185477 kernel: raid6: using algorithm neonx8 gen() 13724 MB/s Sep 6 00:13:18.185487 kernel: raid6: .... xor() 10778 MB/s, rmw enabled Sep 6 00:13:18.185496 kernel: raid6: using neon recovery algorithm Sep 6 00:13:18.197372 kernel: xor: measuring software checksum speed Sep 6 00:13:18.197432 kernel: 8regs : 17130 MB/sec Sep 6 00:13:18.197443 kernel: 32regs : 20712 MB/sec Sep 6 00:13:18.198362 kernel: arm64_neon : 27804 MB/sec Sep 6 00:13:18.198385 kernel: xor: using function: arm64_neon (27804 MB/sec) Sep 6 00:13:18.252105 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 00:13:18.265235 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:13:18.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Sep 6 00:13:18.271107 kernel: audit: type=1130 audit(1757117598.265:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:18.271000 audit: BPF prog-id=7 op=LOAD
Sep 6 00:13:18.272000 audit: BPF prog-id=8 op=LOAD
Sep 6 00:13:18.273398 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:13:18.288819 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Sep 6 00:13:18.292275 systemd[1]: Started systemd-udevd.service.
Sep 6 00:13:18.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:18.295051 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 00:13:18.305815 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Sep 6 00:13:18.335211 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 00:13:18.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:18.336620 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:13:18.380223 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:13:18.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:18.408846 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 00:13:18.413390 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:13:18.413404 kernel: GPT:9289727 != 19775487
Sep 6 00:13:18.413413 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:13:18.413422 kernel: GPT:9289727 != 19775487
Sep 6 00:13:18.413430 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:13:18.413439 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:13:18.432016 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 00:13:18.433901 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (538)
Sep 6 00:13:18.433170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 00:13:18.439114 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 00:13:18.444905 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 00:13:18.448218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:13:18.449703 systemd[1]: Starting disk-uuid.service...
Sep 6 00:13:18.456521 disk-uuid[561]: Primary Header is updated.
Sep 6 00:13:18.456521 disk-uuid[561]: Secondary Entries is updated.
Sep 6 00:13:18.456521 disk-uuid[561]: Secondary Header is updated.
Sep 6 00:13:18.461106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:13:18.464089 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:13:18.466092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:13:19.471080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:13:19.471386 disk-uuid[562]: The operation has completed successfully.
Sep 6 00:13:19.506366 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:13:19.506464 systemd[1]: Finished disk-uuid.service.
Sep 6 00:13:19.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 6 00:13:19.514637 systemd[1]: Starting verity-setup.service...
Sep 6 00:13:19.529102 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 00:13:19.549836 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 00:13:19.552942 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 00:13:19.555310 systemd[1]: Finished verity-setup.service.
Sep 6 00:13:19.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.607092 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 00:13:19.607602 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 00:13:19.608306 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 00:13:19.609031 systemd[1]: Starting ignition-setup.service...
Sep 6 00:13:19.611384 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 00:13:19.622391 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:13:19.622425 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:13:19.622440 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:13:19.632235 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:13:19.638533 systemd[1]: Finished ignition-setup.service.
Sep 6 00:13:19.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.640113 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 00:13:19.694862 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 00:13:19.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success'
Sep 6 00:13:19.696000 audit: BPF prog-id=9 op=LOAD
Sep 6 00:13:19.697218 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:13:19.697261 ignition[656]: Ignition 2.14.0
Sep 6 00:13:19.697283 ignition[656]: Stage: fetch-offline
Sep 6 00:13:19.697317 ignition[656]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:19.697326 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:19.697450 ignition[656]: parsed url from cmdline: ""
Sep 6 00:13:19.697454 ignition[656]: no config URL provided
Sep 6 00:13:19.697458 ignition[656]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:13:19.697464 ignition[656]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:13:19.697715 ignition[656]: op(1): [started] loading QEMU firmware config module
Sep 6 00:13:19.697720 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 00:13:19.705861 ignition[656]: op(1): [finished] loading QEMU firmware config module
Sep 6 00:13:19.717508 systemd-networkd[738]: lo: Link UP
Sep 6 00:13:19.717521 systemd-networkd[738]: lo: Gained carrier
Sep 6 00:13:19.717898 systemd-networkd[738]: Enumeration completed
Sep 6 00:13:19.718066 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:13:19.718954 systemd-networkd[738]: eth0: Link UP
Sep 6 00:13:19.718957 systemd-networkd[738]: eth0: Gained carrier
Sep 6 00:13:19.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.722783 systemd[1]: Started systemd-networkd.service.
Sep 6 00:13:19.723625 systemd[1]: Reached target network.target.
Sep 6 00:13:19.725899 systemd[1]: Starting iscsiuio.service...
Sep 6 00:13:19.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.733139 systemd[1]: Started iscsiuio.service.
Sep 6 00:13:19.734768 systemd[1]: Starting iscsid.service...
Sep 6 00:13:19.737955 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:13:19.737955 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 6 00:13:19.737955 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 00:13:19.737955 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 00:13:19.737955 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 00:13:19.737955 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 00:13:19.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.740791 systemd[1]: Started iscsid.service.
Sep 6 00:13:19.742144 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:13:19.745440 systemd[1]: Starting dracut-initqueue.service...
Sep 6 00:13:19.755267 systemd[1]: Finished dracut-initqueue.service.
Sep 6 00:13:19.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.756190 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 00:13:19.757579 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:13:19.759035 systemd[1]: Reached target remote-fs.target.
Sep 6 00:13:19.761248 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 00:13:19.768523 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 00:13:19.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.774939 ignition[656]: parsing config with SHA512: c70e4f98d926d86eb1a150b817b63a5928d0b02e08060fd6d25d935185d3224d882266a7849ecb559e85b2efa003252fe151a4919632dc4f757b2f4fb7e8a2dd
Sep 6 00:13:19.784866 unknown[656]: fetched base config from "system"
Sep 6 00:13:19.784875 unknown[656]: fetched user config from "qemu"
Sep 6 00:13:19.786921 ignition[656]: fetch-offline: fetch-offline passed
Sep 6 00:13:19.787002 ignition[656]: Ignition finished successfully
Sep 6 00:13:19.789323 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 00:13:19.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.790265 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 00:13:19.790999 systemd[1]: Starting ignition-kargs.service...
Sep 6 00:13:19.800845 ignition[760]: Ignition 2.14.0
Sep 6 00:13:19.800854 ignition[760]: Stage: kargs
Sep 6 00:13:19.800950 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:19.800960 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:19.801970 ignition[760]: kargs: kargs passed
Sep 6 00:13:19.802019 ignition[760]: Ignition finished successfully
Sep 6 00:13:19.805444 systemd[1]: Finished ignition-kargs.service.
Sep 6 00:13:19.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.807005 systemd[1]: Starting ignition-disks.service...
Sep 6 00:13:19.813920 ignition[766]: Ignition 2.14.0
Sep 6 00:13:19.813935 ignition[766]: Stage: disks
Sep 6 00:13:19.814038 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:19.814047 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:19.815762 systemd[1]: Finished ignition-disks.service.
Sep 6 00:13:19.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.814905 ignition[766]: disks: disks passed
Sep 6 00:13:19.817466 systemd[1]: Reached target initrd-root-device.target.
Sep 6 00:13:19.814944 ignition[766]: Ignition finished successfully
Sep 6 00:13:19.818610 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:13:19.819916 systemd[1]: Reached target local-fs.target.
Sep 6 00:13:19.821058 systemd[1]: Reached target sysinit.target.
Sep 6 00:13:19.822374 systemd[1]: Reached target basic.target.
Sep 6 00:13:19.824400 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 00:13:19.836494 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 6 00:13:19.840548 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 00:13:19.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.842423 systemd[1]: Mounting sysroot.mount...
Sep 6 00:13:19.848084 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 00:13:19.849012 systemd[1]: Mounted sysroot.mount.
Sep 6 00:13:19.849681 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 00:13:19.851736 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 00:13:19.854110 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 6 00:13:19.854158 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:13:19.854180 systemd[1]: Reached target ignition-diskful.target.
Sep 6 00:13:19.856480 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 00:13:19.859318 systemd[1]: Starting initrd-setup-root.service...
Sep 6 00:13:19.864126 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:13:19.868643 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:13:19.872704 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:13:19.877460 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:13:19.905590 systemd[1]: Finished initrd-setup-root.service.
Sep 6 00:13:19.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.906993 systemd[1]: Starting ignition-mount.service...
Sep 6 00:13:19.908203 systemd[1]: Starting sysroot-boot.service...
Sep 6 00:13:19.913113 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Sep 6 00:13:19.921300 ignition[826]: INFO : Ignition 2.14.0
Sep 6 00:13:19.921300 ignition[826]: INFO : Stage: mount
Sep 6 00:13:19.922893 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:19.922893 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:19.922893 ignition[826]: INFO : mount: mount passed
Sep 6 00:13:19.922893 ignition[826]: INFO : Ignition finished successfully
Sep 6 00:13:19.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:19.924889 systemd[1]: Finished ignition-mount.service.
Sep 6 00:13:19.932503 systemd[1]: Finished sysroot-boot.service.
Sep 6 00:13:19.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:20.566910 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 00:13:20.574157 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (836)
Sep 6 00:13:20.574195 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:13:20.575387 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:13:20.575416 kernel: BTRFS info (device vda6): has skinny extents
Sep 6 00:13:20.580372 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 00:13:20.587183 systemd[1]: Starting ignition-files.service...
Sep 6 00:13:20.602181 ignition[856]: INFO : Ignition 2.14.0
Sep 6 00:13:20.602181 ignition[856]: INFO : Stage: files
Sep 6 00:13:20.604574 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:20.604574 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:20.604574 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:13:20.608679 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:13:20.608679 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:13:20.611715 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:13:20.611715 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:13:20.614359 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:13:20.611975 unknown[856]: wrote ssh authorized keys file for user: core
Sep 6 00:13:20.618974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:13:20.618974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 6 00:13:20.618974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 6 00:13:20.618974 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 6 00:13:20.683983 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:13:20.942370 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 6 00:13:20.944610
ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:13:20.944610 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 00:13:20.970313 systemd-networkd[738]: eth0: Gained IPv6LL
Sep 6 00:13:21.139660 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 6 00:13:21.245764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:13:21.245764 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file
"/sysroot/etc/flatcar/update.conf"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:13:21.252665 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 6 00:13:21.497579 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 6 00:13:21.937373 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 6 00:13:21.939243 ignition[856]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 6 00:13:21.940242 ignition[856]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:13:21.940242 ignition[856]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 6 00:13:21.940242 ignition[856]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 6 00:13:21.940242 ignition[856]: INFO : files: op(f): [started] processing unit
"prepare-helm.service"
Sep 6 00:13:21.940242 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:13:21.948178 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:13:21.976989 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:13:21.980180 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:13:21.980180 ignition[856]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:13:21.980180 ignition[856]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:13:21.980180 ignition[856]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:13:21.980180 ignition[856]:
INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:13:21.980180 ignition[856]: INFO : files: files passed
Sep 6 00:13:21.980180 ignition[856]: INFO : Ignition finished successfully
Sep 6 00:13:21.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:21.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:21.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:21.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:21.978592 systemd[1]: Finished ignition-files.service.
Sep 6 00:13:21.980130 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 00:13:21.980794 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 00:13:21.993849 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 6 00:13:21.981425 systemd[1]: Starting ignition-quench.service...
Sep 6 00:13:21.995706 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:13:21.986670 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:13:21.986954 systemd[1]: Finished ignition-quench.service.
Sep 6 00:13:21.988423 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 00:13:21.989331 systemd[1]: Reached target ignition-complete.target.
Sep 6 00:13:21.991146 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 00:13:22.003494 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:13:22.003595 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 00:13:22.010125 kernel: kauditd_printk_skb: 27 callbacks suppressed
Sep 6 00:13:22.010149 kernel: audit: type=1130 audit(1757117602.004:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.010160 kernel: audit: type=1131 audit(1757117602.007:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.010125 systemd[1]: Reached target initrd-fs.target.
Sep 6 00:13:22.010700 systemd[1]: Reached target initrd.target.
Sep 6 00:13:22.011777 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 00:13:22.012519 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 00:13:22.023111 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 00:13:22.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 6 00:13:22.026892 systemd[1]: Starting initrd-cleanup.service...
Sep 6 00:13:22.027987 kernel: audit: type=1130 audit(1757117602.023:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.035239 systemd[1]: Stopped target nss-lookup.target.
Sep 6 00:13:22.035939 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 00:13:22.037269 systemd[1]: Stopped target timers.target.
Sep 6 00:13:22.038424 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:13:22.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.038532 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 00:13:22.043321 kernel: audit: type=1131 audit(1757117602.039:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.042036 systemd[1]: Stopped target initrd.target.
Sep 6 00:13:22.042719 systemd[1]: Stopped target basic.target.
Sep 6 00:13:22.043881 systemd[1]: Stopped target ignition-complete.target.
Sep 6 00:13:22.044915 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 00:13:22.045946 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 00:13:22.047173 systemd[1]: Stopped target remote-fs.target.
Sep 6 00:13:22.048299 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 00:13:22.049467 systemd[1]: Stopped target sysinit.target.
Sep 6 00:13:22.050549 systemd[1]: Stopped target local-fs.target.
Sep 6 00:13:22.051591 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 00:13:22.052640 systemd[1]: Stopped target swap.target.
Sep 6 00:13:22.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.057095 kernel: audit: type=1131 audit(1757117602.054:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.053604 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:13:22.053707 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 00:13:22.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.057047 systemd[1]: Stopped target cryptsetup.target.
Sep 6 00:13:22.057670 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:13:22.065317 kernel: audit: type=1131 audit(1757117602.058:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.065337 kernel: audit: type=1131 audit(1757117602.062:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.057778 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 00:13:22.061372 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:13:22.061470 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 00:13:22.065360 systemd[1]: Stopped target paths.target.
Sep 6 00:13:22.065968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:13:22.069129 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 00:13:22.070445 systemd[1]: Stopped target slices.target.
Sep 6 00:13:22.071041 systemd[1]: Stopped target sockets.target.
Sep 6 00:13:22.072110 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:13:22.072175 systemd[1]: Closed iscsid.socket.
Sep 6 00:13:22.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.073166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:13:22.080244 kernel: audit: type=1131 audit(1757117602.074:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.080261 kernel: audit: type=1131 audit(1757117602.077:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.073260 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 00:13:22.074554 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:13:22.074638 systemd[1]: Stopped ignition-files.service.
Sep 6 00:13:22.078513 systemd[1]: Stopping ignition-mount.service...
Sep 6 00:13:22.081436 systemd[1]: Stopping iscsiuio.service...
Sep 6 00:13:22.085905 ignition[897]: INFO : Ignition 2.14.0
Sep 6 00:13:22.085905 ignition[897]: INFO : Stage: umount
Sep 6 00:13:22.085905 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:13:22.085905 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:13:22.092574 kernel: audit: type=1131 audit(1757117602.083:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.082596 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:13:22.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.094809 ignition[897]: INFO : umount: umount passed
Sep 6 00:13:22.094809 ignition[897]: INFO : Ignition finished successfully
Sep 6 00:13:22.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.082766 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 00:13:22.084798 systemd[1]: Stopping sysroot-boot.service...
Sep 6 00:13:22.085459 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:13:22.085618 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 00:13:22.088829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:13:22.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.088964 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 00:13:22.092162 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 00:13:22.092252 systemd[1]: Stopped iscsiuio.service.
Sep 6 00:13:22.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.093670 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:13:22.093755 systemd[1]: Stopped ignition-mount.service.
Sep 6 00:13:22.096579 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:13:22.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.100438 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:13:22.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.100535 systemd[1]: Finished initrd-cleanup.service.
Sep 6 00:13:22.104078 systemd[1]: Stopped target network.target.
Sep 6 00:13:22.106532 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:13:22.106981 systemd[1]: Closed iscsiuio.socket.
Sep 6 00:13:22.107711 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:13:22.107763 systemd[1]: Stopped ignition-disks.service.
Sep 6 00:13:22.109893 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:13:22.109933 systemd[1]: Stopped ignition-kargs.service.
Sep 6 00:13:22.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.113032 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:13:22.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.113091 systemd[1]: Stopped ignition-setup.service.
Sep 6 00:13:22.116200 systemd[1]: Stopping systemd-networkd.service...
Sep 6 00:13:22.117565 systemd[1]: Stopping systemd-resolved.service...
Sep 6 00:13:22.133000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 00:13:22.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.123124 systemd-networkd[738]: eth0: DHCPv6 lease lost
Sep 6 00:13:22.134000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 00:13:22.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.124744 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:13:22.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.124865 systemd[1]: Stopped systemd-networkd.service.
Sep 6 00:13:22.127179 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:13:22.127263 systemd[1]: Stopped systemd-resolved.service.
Sep 6 00:13:22.128770 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:13:22.128801 systemd[1]: Closed systemd-networkd.socket.
Sep 6 00:13:22.130762 systemd[1]: Stopping network-cleanup.service...
Sep 6 00:13:22.131810 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:13:22.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.131867 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 00:13:22.133770 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:13:22.133810 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 00:13:22.135537 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:13:22.135576 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 00:13:22.137067 systemd[1]: Stopping systemd-udevd.service...
Sep 6 00:13:22.141558 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 00:13:22.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.144473 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:13:22.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.144561 systemd[1]: Stopped network-cleanup.service.
Sep 6 00:13:22.149574 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 00:13:22.149654 systemd[1]: Stopped sysroot-boot.service.
Sep 6 00:13:22.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.150885 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:13:22.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.150991 systemd[1]: Stopped systemd-udevd.service.
Sep 6 00:13:22.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.152369 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 00:13:22.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.152406 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 00:13:22.153380 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 00:13:22.153409 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 00:13:22.154400 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 00:13:22.154437 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 00:13:22.155578 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 00:13:22.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.155611 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 00:13:22.156592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:13:22.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:22.156625 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 00:13:22.157702 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 00:13:22.157741 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 00:13:22.159360 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 00:13:22.160006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:13:22.160057 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 00:13:22.164720 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 00:13:22.164812 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 00:13:22.166044 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 00:13:22.168082 systemd[1]: Starting initrd-switch-root.service...
Sep 6 00:13:22.173744 systemd[1]: Switching root.
Sep 6 00:13:22.175000 audit: BPF prog-id=8 op=UNLOAD
Sep 6 00:13:22.175000 audit: BPF prog-id=7 op=UNLOAD
Sep 6 00:13:22.176000 audit: BPF prog-id=5 op=UNLOAD
Sep 6 00:13:22.176000 audit: BPF prog-id=4 op=UNLOAD
Sep 6 00:13:22.176000 audit: BPF prog-id=3 op=UNLOAD
Sep 6 00:13:22.190404 iscsid[744]: iscsid shutting down.
Sep 6 00:13:22.191047 systemd-journald[290]: Journal stopped
Sep 6 00:13:24.325767 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Sep 6 00:13:24.325818 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 00:13:24.325831 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 00:13:24.325841 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 00:13:24.325851 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 00:13:24.325860 kernel: SELinux: policy capability open_perms=1
Sep 6 00:13:24.325870 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 00:13:24.325880 kernel: SELinux: policy capability always_check_network=0
Sep 6 00:13:24.325892 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 00:13:24.325903 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 00:13:24.325915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 00:13:24.325925 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 00:13:24.325940 systemd[1]: Successfully loaded SELinux policy in 40.607ms.
Sep 6 00:13:24.325954 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.202ms.
Sep 6 00:13:24.325968 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 00:13:24.325979 systemd[1]: Detected virtualization kvm.
Sep 6 00:13:24.325991 systemd[1]: Detected architecture arm64.
Sep 6 00:13:24.326002 systemd[1]: Detected first boot.
Sep 6 00:13:24.326013 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:13:24.326024 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 00:13:24.326034 systemd[1]: Populated /etc with preset unit settings.
Sep 6 00:13:24.326164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:13:24.326193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:13:24.326211 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:13:24.326222 systemd[1]: Queued start job for default target multi-user.target.
Sep 6 00:13:24.326233 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 6 00:13:24.326244 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 00:13:24.326255 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 00:13:24.326265 systemd[1]: Created slice system-getty.slice.
Sep 6 00:13:24.326276 systemd[1]: Created slice system-modprobe.slice.
Sep 6 00:13:24.326287 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 00:13:24.326298 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 00:13:24.326309 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 00:13:24.326319 systemd[1]: Created slice user.slice.
Sep 6 00:13:24.326329 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 00:13:24.326341 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 00:13:24.326351 systemd[1]: Set up automount boot.automount.
Sep 6 00:13:24.326361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 00:13:24.326373 systemd[1]: Reached target integritysetup.target.
Sep 6 00:13:24.326383 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 00:13:24.326394 systemd[1]: Reached target remote-fs.target.
Sep 6 00:13:24.326405 systemd[1]: Reached target slices.target.
Sep 6 00:13:24.326416 systemd[1]: Reached target swap.target.
Sep 6 00:13:24.326427 systemd[1]: Reached target torcx.target.
Sep 6 00:13:24.326438 systemd[1]: Reached target veritysetup.target.
Sep 6 00:13:24.326448 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 00:13:24.326458 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 00:13:24.326468 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 00:13:24.326480 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 00:13:24.326491 systemd[1]: Listening on systemd-journald.socket.
Sep 6 00:13:24.326501 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 00:13:24.326511 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 00:13:24.326521 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 00:13:24.326531 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 00:13:24.326543 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 00:13:24.326553 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 00:13:24.326564 systemd[1]: Mounting media.mount...
Sep 6 00:13:24.326576 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 00:13:24.326586 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 00:13:24.326657 systemd[1]: Mounting tmp.mount...
Sep 6 00:13:24.326674 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 00:13:24.326686 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:13:24.326696 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 00:13:24.326706 systemd[1]: Starting modprobe@configfs.service...
Sep 6 00:13:24.326717 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:13:24.326738 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:13:24.326754 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:13:24.326766 systemd[1]: Starting modprobe@fuse.service...
Sep 6 00:13:24.326777 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:13:24.326789 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 00:13:24.326801 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 6 00:13:24.326812 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 6 00:13:24.326823 systemd[1]: Starting systemd-journald.service...
Sep 6 00:13:24.326834 kernel: fuse: init (API version 7.34)
Sep 6 00:13:24.326844 kernel: loop: module loaded
Sep 6 00:13:24.326856 systemd[1]: Starting systemd-modules-load.service...
Sep 6 00:13:24.326867 systemd[1]: Starting systemd-network-generator.service...
Sep 6 00:13:24.326878 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 00:13:24.326889 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 00:13:24.326901 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 00:13:24.326912 systemd[1]: Mounted dev-mqueue.mount.
Sep 6 00:13:24.326923 systemd[1]: Mounted media.mount.
Sep 6 00:13:24.326934 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 6 00:13:24.326946 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 6 00:13:24.326960 systemd-journald[1032]: Journal started
Sep 6 00:13:24.327015 systemd-journald[1032]: Runtime Journal (/run/log/journal/e746b0c63c384f35a1920dab758fd363) is 6.0M, max 48.7M, 42.6M free.
Sep 6 00:13:24.254000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 00:13:24.254000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 6 00:13:24.324000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 6 00:13:24.324000 audit[1032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcab08b30 a2=4000 a3=1 items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:13:24.324000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 6 00:13:24.329138 systemd[1]: Started systemd-journald.service.
Sep 6 00:13:24.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.329988 systemd[1]: Mounted tmp.mount.
Sep 6 00:13:24.330970 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 00:13:24.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.331994 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 6 00:13:24.332234 systemd[1]: Finished modprobe@configfs.service.
Sep 6 00:13:24.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.333134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:13:24.333437 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:13:24.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.334298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:13:24.334499 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:13:24.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.335454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:13:24.335655 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:13:24.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.336586 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 6 00:13:24.336980 systemd[1]: Finished modprobe@fuse.service.
Sep 6 00:13:24.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.337909 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:13:24.338241 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:13:24.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.339297 systemd[1]: Finished systemd-modules-load.service.
Sep 6 00:13:24.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.340404 systemd[1]: Finished systemd-network-generator.service.
Sep 6 00:13:24.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.341577 systemd[1]: Finished systemd-remount-fs.service.
Sep 6 00:13:24.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.342658 systemd[1]: Reached target network-pre.target.
Sep 6 00:13:24.345059 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 6 00:13:24.346999 systemd[1]: Mounting sys-kernel-config.mount...
Sep 6 00:13:24.347679 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 6 00:13:24.351335 systemd[1]: Starting systemd-hwdb-update.service...
Sep 6 00:13:24.353055 systemd[1]: Starting systemd-journal-flush.service...
Sep 6 00:13:24.353887 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:13:24.355301 systemd[1]: Starting systemd-random-seed.service...
Sep 6 00:13:24.356117 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:13:24.357320 systemd[1]: Starting systemd-sysctl.service...
Sep 6 00:13:24.359548 systemd-journald[1032]: Time spent on flushing to /var/log/journal/e746b0c63c384f35a1920dab758fd363 is 12.103ms for 931 entries.
Sep 6 00:13:24.359548 systemd-journald[1032]: System Journal (/var/log/journal/e746b0c63c384f35a1920dab758fd363) is 8.0M, max 195.6M, 187.6M free.
Sep 6 00:13:24.381419 systemd-journald[1032]: Received client request to flush runtime journal.
Sep 6 00:13:24.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.361156 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 6 00:13:24.362740 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 6 00:13:24.363773 systemd[1]: Mounted sys-kernel-config.mount.
Sep 6 00:13:24.366016 systemd[1]: Starting systemd-sysusers.service...
Sep 6 00:13:24.367217 systemd[1]: Finished systemd-random-seed.service.
Sep 6 00:13:24.369709 systemd[1]: Reached target first-boot-complete.target.
Sep 6 00:13:24.383278 systemd[1]: Finished systemd-journal-flush.service.
Sep 6 00:13:24.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.386403 systemd[1]: Finished systemd-sysctl.service.
Sep 6 00:13:24.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.396572 systemd[1]: Finished systemd-sysusers.service.
Sep 6 00:13:24.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.397653 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 00:13:24.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.399813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 00:13:24.401792 systemd[1]: Starting systemd-udev-settle.service...
Sep 6 00:13:24.408623 udevadm[1087]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 6 00:13:24.427321 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 00:13:24.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.734164 systemd[1]: Finished systemd-hwdb-update.service.
Sep 6 00:13:24.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.736130 systemd[1]: Starting systemd-udevd.service...
Sep 6 00:13:24.751573 systemd-udevd[1090]: Using default interface naming scheme 'v252'.
Sep 6 00:13:24.763544 systemd[1]: Started systemd-udevd.service.
Sep 6 00:13:24.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.767569 systemd[1]: Starting systemd-networkd.service...
Sep 6 00:13:24.774500 systemd[1]: Starting systemd-userdbd.service...
Sep 6 00:13:24.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.806031 systemd[1]: Started systemd-userdbd.service.
Sep 6 00:13:24.826742 systemd[1]: Found device dev-ttyAMA0.device.
Sep 6 00:13:24.837299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 00:13:24.862249 systemd-networkd[1099]: lo: Link UP
Sep 6 00:13:24.862259 systemd-networkd[1099]: lo: Gained carrier
Sep 6 00:13:24.862633 systemd-networkd[1099]: Enumeration completed
Sep 6 00:13:24.862750 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:13:24.862776 systemd[1]: Started systemd-networkd.service.
Sep 6 00:13:24.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.864516 systemd-networkd[1099]: eth0: Link UP
Sep 6 00:13:24.864527 systemd-networkd[1099]: eth0: Gained carrier
Sep 6 00:13:24.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.882591 systemd[1]: Finished systemd-udev-settle.service.
Sep 6 00:13:24.884575 systemd[1]: Starting lvm2-activation-early.service...
Sep 6 00:13:24.886289 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:13:24.892727 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:13:24.924988 systemd[1]: Finished lvm2-activation-early.service.
Sep 6 00:13:24.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.925924 systemd[1]: Reached target cryptsetup.target.
Sep 6 00:13:24.927846 systemd[1]: Starting lvm2-activation.service...
Sep 6 00:13:24.931489 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 6 00:13:24.957244 systemd[1]: Finished lvm2-activation.service.
Sep 6 00:13:24.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:24.958001 systemd[1]: Reached target local-fs-pre.target.
Sep 6 00:13:24.958740 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 6 00:13:24.958768 systemd[1]: Reached target local-fs.target.
Sep 6 00:13:24.959371 systemd[1]: Reached target machines.target.
Sep 6 00:13:24.961158 systemd[1]: Starting ldconfig.service...
Sep 6 00:13:24.962104 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:13:24.962161 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:24.963371 systemd[1]: Starting systemd-boot-update.service...
Sep 6 00:13:24.965508 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 6 00:13:24.967581 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 6 00:13:24.969622 systemd[1]: Starting systemd-sysext.service...
Sep 6 00:13:24.970901 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl)
Sep 6 00:13:24.971955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 6 00:13:24.983551 systemd[1]: Unmounting usr-share-oem.mount...
Sep 6 00:13:24.990279 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 6 00:13:24.990533 systemd[1]: Unmounted usr-share-oem.mount.
Sep 6 00:13:25.000550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 6 00:13:25.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.048231 kernel: loop0: detected capacity change from 0 to 203944
Sep 6 00:13:25.050513 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 6 00:13:25.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.064101 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 6 00:13:25.071102 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31)
Sep 6 00:13:25.071102 systemd-fsck[1143]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 6 00:13:25.073094 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 6 00:13:25.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.081098 kernel: loop1: detected capacity change from 0 to 203944
Sep 6 00:13:25.088214 (sd-sysext)[1150]: Using extensions 'kubernetes'.
Sep 6 00:13:25.088570 (sd-sysext)[1150]: Merged extensions into '/usr'.
Sep 6 00:13:25.104009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.105665 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:13:25.107860 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:13:25.109714 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:13:25.110637 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.110792 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.111566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:13:25.111734 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:13:25.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.112897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:13:25.113037 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:13:25.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.114343 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:13:25.114531 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:13:25.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.116064 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:13:25.116179 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.175413 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 6 00:13:25.181168 systemd[1]: Finished ldconfig.service.
Sep 6 00:13:25.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.320645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 6 00:13:25.322455 systemd[1]: Mounting boot.mount...
Sep 6 00:13:25.324275 systemd[1]: Mounting usr-share-oem.mount...
Sep 6 00:13:25.328805 systemd[1]: Mounted usr-share-oem.mount.
Sep 6 00:13:25.331339 systemd[1]: Finished systemd-sysext.service.
Sep 6 00:13:25.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.332139 systemd[1]: Mounted boot.mount.
Sep 6 00:13:25.335942 systemd[1]: Starting ensure-sysext.service...
Sep 6 00:13:25.337988 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 6 00:13:25.339044 systemd[1]: Finished systemd-boot-update.service.
Sep 6 00:13:25.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.346883 systemd[1]: Reloading.
Sep 6 00:13:25.347311 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 6 00:13:25.348093 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 6 00:13:25.349495 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 6 00:13:25.385153 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-09-06T00:13:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 00:13:25.385180 /usr/lib/systemd/system-generators/torcx-generator[1188]: time="2025-09-06T00:13:25Z" level=info msg="torcx already run"
Sep 6 00:13:25.466062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 00:13:25.466092 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 00:13:25.481366 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:13:25.535037 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 6 00:13:25.537811 systemd[1]: Starting audit-rules.service...
Sep 6 00:13:25.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.539555 systemd[1]: Starting clean-ca-certificates.service...
Sep 6 00:13:25.541336 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 6 00:13:25.543651 systemd[1]: Starting systemd-resolved.service...
Sep 6 00:13:25.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.545855 systemd[1]: Starting systemd-timesyncd.service...
Sep 6 00:13:25.547554 systemd[1]: Starting systemd-update-utmp.service...
Sep 6 00:13:25.548788 systemd[1]: Finished clean-ca-certificates.service.
Sep 6 00:13:25.551587 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:13:25.553261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.554000 audit[1245]: SYSTEM_BOOT pid=1245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.554432 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:13:25.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.556164 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:13:25.557804 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:13:25.558442 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.558567 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.558667 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:13:25.559377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:13:25.559505 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:13:25.560480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:13:25.560600 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:13:25.561694 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:13:25.561851 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:13:25.564334 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:13:25.564505 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.566241 systemd[1]: Finished systemd-update-utmp.service.
Sep 6 00:13:25.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.570194 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.571375 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:13:25.574474 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:13:25.576349 systemd[1]: Starting modprobe@loop.service...
Sep 6 00:13:25.576976 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.577177 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.577299 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:13:25.578316 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 6 00:13:25.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.579458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:13:25.579587 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:13:25.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.580592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:13:25.580730 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:13:25.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.583671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 6 00:13:25.583828 systemd[1]: Finished modprobe@loop.service.
Sep 6 00:13:25.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.585177 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.586609 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 00:13:25.589139 systemd[1]: Starting modprobe@drm.service...
Sep 6 00:13:25.590848 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 00:13:25.591673 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.591825 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.593118 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 00:13:25.600148 systemd[1]: Starting systemd-update-done.service...
Sep 6 00:13:25.600793 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 6 00:13:25.601993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 6 00:13:25.602169 systemd[1]: Finished modprobe@dm_mod.service.
Sep 6 00:13:25.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.603847 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 6 00:13:25.603985 systemd[1]: Finished modprobe@drm.service.
Sep 6 00:13:25.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.605363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 6 00:13:25.605515 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 6 00:13:25.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.607121 systemd[1]: Finished systemd-update-done.service.
Sep 6 00:13:25.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.608152 systemd[1]: Finished ensure-sysext.service.
Sep 6 00:13:25.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 00:13:25.610264 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 6 00:13:25.610309 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.617000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 6 00:13:25.617000 audit[1277]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd43106e0 a2=420 a3=0 items=0 ppid=1233 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 6 00:13:25.617000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 6 00:13:25.617583 augenrules[1277]: No rules
Sep 6 00:13:25.618294 systemd[1]: Finished audit-rules.service.
Sep 6 00:13:25.618983 systemd[1]: Started systemd-timesyncd.service.
Sep 6 00:13:25.193374 systemd-timesyncd[1242]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 6 00:13:25.210474 systemd-journald[1032]: Time jumped backwards, rotating.
Sep 6 00:13:25.193448 systemd-timesyncd[1242]: Initial clock synchronization to Sat 2025-09-06 00:13:25.193278 UTC.
Sep 6 00:13:25.193763 systemd[1]: Reached target time-set.target.
Sep 6 00:13:25.199712 systemd-resolved[1238]: Positive Trust Anchors:
Sep 6 00:13:25.199718 systemd-resolved[1238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:13:25.199754 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 00:13:25.212570 systemd-resolved[1238]: Defaulting to hostname 'linux'.
Sep 6 00:13:25.214431 systemd[1]: Started systemd-resolved.service.
Sep 6 00:13:25.215153 systemd[1]: Reached target network.target.
Sep 6 00:13:25.215721 systemd[1]: Reached target nss-lookup.target.
Sep 6 00:13:25.216297 systemd[1]: Reached target sysinit.target.
Sep 6 00:13:25.216964 systemd[1]: Started motdgen.path.
Sep 6 00:13:25.217497 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 6 00:13:25.218527 systemd[1]: Started logrotate.timer.
Sep 6 00:13:25.219179 systemd[1]: Started mdadm.timer.
Sep 6 00:13:25.219677 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 6 00:13:25.220425 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 6 00:13:25.220450 systemd[1]: Reached target paths.target.
Sep 6 00:13:25.221008 systemd[1]: Reached target timers.target.
Sep 6 00:13:25.221854 systemd[1]: Listening on dbus.socket.
Sep 6 00:13:25.223527 systemd[1]: Starting docker.socket...
Sep 6 00:13:25.225110 systemd[1]: Listening on sshd.socket.
Sep 6 00:13:25.225771 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.226095 systemd[1]: Listening on docker.socket.
Sep 6 00:13:25.226691 systemd[1]: Reached target sockets.target.
Sep 6 00:13:25.227350 systemd[1]: Reached target basic.target.
Sep 6 00:13:25.228062 systemd[1]: System is tainted: cgroupsv1
Sep 6 00:13:25.228107 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.228128 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 6 00:13:25.229109 systemd[1]: Starting containerd.service...
Sep 6 00:13:25.230733 systemd[1]: Starting dbus.service...
Sep 6 00:13:25.232395 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 6 00:13:25.234287 systemd[1]: Starting extend-filesystems.service...
Sep 6 00:13:25.235077 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 6 00:13:25.236270 systemd[1]: Starting motdgen.service...
Sep 6 00:13:25.239477 jq[1295]: false
Sep 6 00:13:25.237882 systemd[1]: Starting prepare-helm.service...
Sep 6 00:13:25.239605 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 6 00:13:25.241424 systemd[1]: Starting sshd-keygen.service...
Sep 6 00:13:25.243897 systemd[1]: Starting systemd-logind.service...
Sep 6 00:13:25.244533 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 6 00:13:25.244597 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:13:25.251076 systemd[1]: Starting update-engine.service...
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found loop1
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda1
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda2
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda3
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found usr
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda4
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda6
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda7
Sep 6 00:13:25.256312 extend-filesystems[1296]: Found vda9
Sep 6 00:13:25.256312 extend-filesystems[1296]: Checking size of /dev/vda9
Sep 6 00:13:25.252719 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 6 00:13:25.254890 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:13:25.278112 jq[1314]: true
Sep 6 00:13:25.255142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 00:13:25.291912 extend-filesystems[1296]: Resized partition /dev/vda9
Sep 6 00:13:25.278127 dbus-daemon[1294]: [system] SELinux support is enabled
Sep 6 00:13:25.295774 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 6 00:13:25.256146 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:13:25.295971 extend-filesystems[1339]: resize2fs 1.46.5 (30-Dec-2021)
Sep 6 00:13:25.297334 tar[1317]: linux-arm64/helm
Sep 6 00:13:25.257300 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 6 00:13:25.297652 jq[1320]: true
Sep 6 00:13:25.278287 systemd[1]: Started dbus.service.
Sep 6 00:13:25.280653 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:13:25.280673 systemd[1]: Reached target system-config.target.
Sep 6 00:13:25.281446 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:13:25.281464 systemd[1]: Reached target user-config.target.
Sep 6 00:13:25.282836 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:13:25.283051 systemd[1]: Finished motdgen.service.
Sep 6 00:13:25.317895 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 6 00:13:25.326966 update_engine[1313]: I0906 00:13:25.326581 1313 main.cc:92] Flatcar Update Engine starting
Sep 6 00:13:25.327868 extend-filesystems[1339]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:13:25.327868 extend-filesystems[1339]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:13:25.327868 extend-filesystems[1339]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 6 00:13:25.332292 extend-filesystems[1296]: Resized filesystem in /dev/vda9
Sep 6 00:13:25.328600 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:13:25.333433 update_engine[1313]: I0906 00:13:25.332496 1313 update_check_scheduler.cc:74] Next update check in 6m12s
Sep 6 00:13:25.328843 systemd[1]: Finished extend-filesystems.service.
Sep 6 00:13:25.332423 systemd[1]: Started update-engine.service.
Sep 6 00:13:25.334686 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:13:25.335433 systemd[1]: Started locksmithd.service.
Sep 6 00:13:25.336012 systemd-logind[1305]: New seat seat0.
Sep 6 00:13:25.336547 bash[1352]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:13:25.337617 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 00:13:25.341355 systemd[1]: Started systemd-logind.service.
Sep 6 00:13:25.345389 env[1321]: time="2025-09-06T00:13:25.345315328Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 00:13:25.370509 env[1321]: time="2025-09-06T00:13:25.370466888Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:13:25.370640 env[1321]: time="2025-09-06T00:13:25.370618848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.371974 env[1321]: time="2025-09-06T00:13:25.371939808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372007 env[1321]: time="2025-09-06T00:13:25.371975688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372221 env[1321]: time="2025-09-06T00:13:25.372196648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372258 env[1321]: time="2025-09-06T00:13:25.372220368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372258 env[1321]: time="2025-09-06T00:13:25.372234208Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 00:13:25.372258 env[1321]: time="2025-09-06T00:13:25.372244168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372332 env[1321]: time="2025-09-06T00:13:25.372315488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372643 env[1321]: time="2025-09-06T00:13:25.372622888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372811 env[1321]: time="2025-09-06T00:13:25.372789728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:13:25.372847 env[1321]: time="2025-09-06T00:13:25.372811248Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:13:25.372882 env[1321]: time="2025-09-06T00:13:25.372865048Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 00:13:25.372909 env[1321]: time="2025-09-06T00:13:25.372881048Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:13:25.376188 env[1321]: time="2025-09-06T00:13:25.376160768Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:13:25.376237 env[1321]: time="2025-09-06T00:13:25.376192848Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:13:25.376237 env[1321]: time="2025-09-06T00:13:25.376205968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:13:25.376274 env[1321]: time="2025-09-06T00:13:25.376235448Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376274 env[1321]: time="2025-09-06T00:13:25.376253528Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376274 env[1321]: time="2025-09-06T00:13:25.376267728Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376335 env[1321]: time="2025-09-06T00:13:25.376280648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376695 env[1321]: time="2025-09-06T00:13:25.376668608Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376723 env[1321]: time="2025-09-06T00:13:25.376697928Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376752 env[1321]: time="2025-09-06T00:13:25.376713208Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376774 env[1321]: time="2025-09-06T00:13:25.376757328Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.376801 env[1321]: time="2025-09-06T00:13:25.376774688Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:13:25.376954 env[1321]: time="2025-09-06T00:13:25.376933528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:13:25.377079 env[1321]: time="2025-09-06T00:13:25.377060008Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:13:25.377410 env[1321]: time="2025-09-06T00:13:25.377390048Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:13:25.377441 env[1321]: time="2025-09-06T00:13:25.377422168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377441 env[1321]: time="2025-09-06T00:13:25.377436128Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:13:25.377580 env[1321]: time="2025-09-06T00:13:25.377564768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377606 env[1321]: time="2025-09-06T00:13:25.377582408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377606 env[1321]: time="2025-09-06T00:13:25.377596968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377641 env[1321]: time="2025-09-06T00:13:25.377609088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377641 env[1321]: time="2025-09-06T00:13:25.377620968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377641 env[1321]: time="2025-09-06T00:13:25.377632488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377698 env[1321]: time="2025-09-06T00:13:25.377649168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377698 env[1321]: time="2025-09-06T00:13:25.377660888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377698 env[1321]: time="2025-09-06T00:13:25.377673808Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:13:25.377829 env[1321]: time="2025-09-06T00:13:25.377810568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377855 env[1321]: time="2025-09-06T00:13:25.377832248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377855 env[1321]: time="2025-09-06T00:13:25.377845288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.377894 env[1321]: time="2025-09-06T00:13:25.377857128Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:13:25.377894 env[1321]: time="2025-09-06T00:13:25.377870568Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 00:13:25.377894 env[1321]: time="2025-09-06T00:13:25.377881848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:13:25.377946 env[1321]: time="2025-09-06T00:13:25.377900488Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 00:13:25.377946 env[1321]: time="2025-09-06T00:13:25.377933728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:13:25.378159 env[1321]: time="2025-09-06T00:13:25.378108368Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:13:25.378700 env[1321]: time="2025-09-06T00:13:25.378167248Z" level=info msg="Connect containerd service"
Sep 6 00:13:25.378700 env[1321]: time="2025-09-06T00:13:25.378200168Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:13:25.379053 env[1321]: time="2025-09-06T00:13:25.379025528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:13:25.379338 env[1321]: time="2025-09-06T00:13:25.379298768Z" level=info msg="Start subscribing containerd event"
Sep 6 00:13:25.379374 env[1321]: time="2025-09-06T00:13:25.379358848Z" level=info msg="Start recovering state"
Sep 6 00:13:25.379445 env[1321]: time="2025-09-06T00:13:25.379430208Z" level=info msg="Start event monitor"
Sep 6 00:13:25.379472 env[1321]: time="2025-09-06T00:13:25.379454608Z" level=info msg="Start snapshots syncer"
Sep 6 00:13:25.379472 env[1321]: time="2025-09-06T00:13:25.379464928Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:13:25.379508 env[1321]: time="2025-09-06T00:13:25.379474968Z" level=info msg="Start streaming server"
Sep 6 00:13:25.379508 env[1321]: time="2025-09-06T00:13:25.379321688Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:13:25.379564 env[1321]: time="2025-09-06T00:13:25.379548888Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:13:25.379693 systemd[1]: Started containerd.service.
Sep 6 00:13:25.380719 env[1321]: time="2025-09-06T00:13:25.380696288Z" level=info msg="containerd successfully booted in 0.044010s" Sep 6 00:13:25.398124 locksmithd[1355]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:13:25.636073 tar[1317]: linux-arm64/LICENSE Sep 6 00:13:25.636167 tar[1317]: linux-arm64/README.md Sep 6 00:13:25.640491 systemd[1]: Finished prepare-helm.service. Sep 6 00:13:25.791914 systemd-networkd[1099]: eth0: Gained IPv6LL Sep 6 00:13:25.793690 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:13:25.794798 systemd[1]: Reached target network-online.target. Sep 6 00:13:25.796918 systemd[1]: Starting kubelet.service... Sep 6 00:13:26.406446 systemd[1]: Started kubelet.service. Sep 6 00:13:26.423691 sshd_keygen[1311]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:13:26.441040 systemd[1]: Finished sshd-keygen.service. Sep 6 00:13:26.443083 systemd[1]: Starting issuegen.service... Sep 6 00:13:26.447521 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:13:26.447723 systemd[1]: Finished issuegen.service. Sep 6 00:13:26.449772 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:13:26.455108 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:13:26.457077 systemd[1]: Started getty@tty1.service. Sep 6 00:13:26.458890 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 00:13:26.459796 systemd[1]: Reached target getty.target. Sep 6 00:13:26.460481 systemd[1]: Reached target multi-user.target. Sep 6 00:13:26.462373 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:13:26.468476 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:13:26.468674 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:13:26.469862 systemd[1]: Startup finished in 5.271s (kernel) + 4.653s (userspace) = 9.925s. 
Sep 6 00:13:26.806618 kubelet[1378]: E0906 00:13:26.806512 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:13:26.808219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:13:26.808348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:13:29.706999 systemd[1]: Created slice system-sshd.slice. Sep 6 00:13:29.708159 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:58718.service. Sep 6 00:13:29.754629 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 58718 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:13:29.756536 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:29.764445 systemd[1]: Created slice user-500.slice. Sep 6 00:13:29.765434 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:13:29.767369 systemd-logind[1305]: New session 1 of user core. Sep 6 00:13:29.773861 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:13:29.775142 systemd[1]: Starting user@500.service... Sep 6 00:13:29.778921 (systemd)[1410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:29.839192 systemd[1410]: Queued start job for default target default.target. Sep 6 00:13:29.839438 systemd[1410]: Reached target paths.target. Sep 6 00:13:29.839454 systemd[1410]: Reached target sockets.target. Sep 6 00:13:29.839464 systemd[1410]: Reached target timers.target. Sep 6 00:13:29.839473 systemd[1410]: Reached target basic.target. Sep 6 00:13:29.839598 systemd[1]: Started user@500.service. Sep 6 00:13:29.840085 systemd[1410]: Reached target default.target. Sep 6 00:13:29.840136 systemd[1410]: Startup finished in 55ms. 
Sep 6 00:13:29.840478 systemd[1]: Started session-1.scope. Sep 6 00:13:29.892233 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:58722.service. Sep 6 00:13:29.942489 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:13:29.944239 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:29.947733 systemd-logind[1305]: New session 2 of user core. Sep 6 00:13:29.948573 systemd[1]: Started session-2.scope. Sep 6 00:13:30.003277 sshd[1419]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:30.005450 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:59606.service. Sep 6 00:13:30.006380 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:58722.service: Deactivated successfully. Sep 6 00:13:30.007327 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:13:30.007384 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:13:30.008080 systemd-logind[1305]: Removed session 2. Sep 6 00:13:30.047491 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 59606 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:13:30.048812 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:30.052203 systemd-logind[1305]: New session 3 of user core. Sep 6 00:13:30.053014 systemd[1]: Started session-3.scope. Sep 6 00:13:30.103055 sshd[1424]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:30.105209 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:59610.service. Sep 6 00:13:30.105941 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:59606.service: Deactivated successfully. Sep 6 00:13:30.106930 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:13:30.107102 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:13:30.107807 systemd-logind[1305]: Removed session 3. 
Sep 6 00:13:30.150068 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 59610 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:13:30.151378 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:30.154976 systemd-logind[1305]: New session 4 of user core. Sep 6 00:13:30.155805 systemd[1]: Started session-4.scope. Sep 6 00:13:30.210763 sshd[1431]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:30.213216 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:59616.service. Sep 6 00:13:30.213728 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:59610.service: Deactivated successfully. Sep 6 00:13:30.214564 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:13:30.214635 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:13:30.215518 systemd-logind[1305]: Removed session 4. Sep 6 00:13:30.255665 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:13:30.257068 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:13:30.260216 systemd-logind[1305]: New session 5 of user core. Sep 6 00:13:30.261029 systemd[1]: Started session-5.scope. Sep 6 00:13:30.317752 sudo[1444]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:13:30.318242 sudo[1444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:13:30.356575 systemd[1]: Starting docker.service... 
Sep 6 00:13:30.410556 env[1456]: time="2025-09-06T00:13:30.410504008Z" level=info msg="Starting up" Sep 6 00:13:30.411871 env[1456]: time="2025-09-06T00:13:30.411844168Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:13:30.411956 env[1456]: time="2025-09-06T00:13:30.411942088Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:13:30.412017 env[1456]: time="2025-09-06T00:13:30.412002968Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:13:30.412084 env[1456]: time="2025-09-06T00:13:30.412070008Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:13:30.414242 env[1456]: time="2025-09-06T00:13:30.414220328Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 00:13:30.414317 env[1456]: time="2025-09-06T00:13:30.414304528Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 00:13:30.414389 env[1456]: time="2025-09-06T00:13:30.414375008Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 00:13:30.414438 env[1456]: time="2025-09-06T00:13:30.414426328Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 00:13:30.617102 env[1456]: time="2025-09-06T00:13:30.616673928Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 00:13:30.617102 env[1456]: time="2025-09-06T00:13:30.616705168Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 00:13:30.617102 env[1456]: time="2025-09-06T00:13:30.616866208Z" level=info msg="Loading containers: start." 
Sep 6 00:13:30.756847 kernel: Initializing XFRM netlink socket Sep 6 00:13:30.780784 env[1456]: time="2025-09-06T00:13:30.780729688Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 00:13:30.835343 systemd-networkd[1099]: docker0: Link UP Sep 6 00:13:30.858190 env[1456]: time="2025-09-06T00:13:30.858141368Z" level=info msg="Loading containers: done." Sep 6 00:13:30.873403 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1549791901-merged.mount: Deactivated successfully. Sep 6 00:13:30.876703 env[1456]: time="2025-09-06T00:13:30.876660328Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 00:13:30.877063 env[1456]: time="2025-09-06T00:13:30.877040328Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 00:13:30.877273 env[1456]: time="2025-09-06T00:13:30.877255048Z" level=info msg="Daemon has completed initialization" Sep 6 00:13:30.895193 systemd[1]: Started docker.service. Sep 6 00:13:30.902844 env[1456]: time="2025-09-06T00:13:30.902707608Z" level=info msg="API listen on /run/docker.sock" Sep 6 00:13:31.448691 env[1321]: time="2025-09-06T00:13:31.448641328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 00:13:32.011047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080544905.mount: Deactivated successfully. 
Sep 6 00:13:33.230951 env[1321]: time="2025-09-06T00:13:33.230907728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:33.232353 env[1321]: time="2025-09-06T00:13:33.232318128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:33.234078 env[1321]: time="2025-09-06T00:13:33.234054528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:33.235722 env[1321]: time="2025-09-06T00:13:33.235696248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:33.236630 env[1321]: time="2025-09-06T00:13:33.236600128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 6 00:13:33.237888 env[1321]: time="2025-09-06T00:13:33.237860528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 6 00:13:34.604925 env[1321]: time="2025-09-06T00:13:34.604877888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:34.606882 env[1321]: time="2025-09-06T00:13:34.606840048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:34.609131 env[1321]: time="2025-09-06T00:13:34.609097768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:34.610877 env[1321]: time="2025-09-06T00:13:34.610850208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:34.611608 env[1321]: time="2025-09-06T00:13:34.611564848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 6 00:13:34.612217 env[1321]: time="2025-09-06T00:13:34.612188208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 6 00:13:35.818977 env[1321]: time="2025-09-06T00:13:35.818930568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:35.820469 env[1321]: time="2025-09-06T00:13:35.820434528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:35.822220 env[1321]: time="2025-09-06T00:13:35.822193848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:35.824094 env[1321]: time="2025-09-06T00:13:35.824070128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 00:13:35.824930 env[1321]: time="2025-09-06T00:13:35.824899488Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 6 00:13:35.825416 env[1321]: time="2025-09-06T00:13:35.825395248Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 6 00:13:36.883539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554508316.mount: Deactivated successfully.
Sep 6 00:13:36.884479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:13:36.884605 systemd[1]: Stopped kubelet.service.
Sep 6 00:13:36.886066 systemd[1]: Starting kubelet.service...
Sep 6 00:13:36.981105 systemd[1]: Started kubelet.service.
Sep 6 00:13:37.032290 kubelet[1595]: E0906 00:13:37.032241 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:13:37.034641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:13:37.034795 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:13:37.457422 env[1321]: time="2025-09-06T00:13:37.457374008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:37.458824 env[1321]: time="2025-09-06T00:13:37.458792408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:37.460842 env[1321]: time="2025-09-06T00:13:37.460811488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:37.462377 env[1321]: time="2025-09-06T00:13:37.462343408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:37.462585 env[1321]: time="2025-09-06T00:13:37.462555408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 00:13:37.463213 env[1321]: time="2025-09-06T00:13:37.463173528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 00:13:38.112476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3305401808.mount: Deactivated successfully. 
Sep 6 00:13:39.064681 env[1321]: time="2025-09-06T00:13:39.064633968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.067688 env[1321]: time="2025-09-06T00:13:39.067648088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.069647 env[1321]: time="2025-09-06T00:13:39.069602168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.071489 env[1321]: time="2025-09-06T00:13:39.071467528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.072369 env[1321]: time="2025-09-06T00:13:39.072341608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 6 00:13:39.072938 env[1321]: time="2025-09-06T00:13:39.072913088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 00:13:39.551771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248970079.mount: Deactivated successfully. 
Sep 6 00:13:39.555853 env[1321]: time="2025-09-06T00:13:39.555806568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.557423 env[1321]: time="2025-09-06T00:13:39.557391008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.558813 env[1321]: time="2025-09-06T00:13:39.558782208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.560080 env[1321]: time="2025-09-06T00:13:39.560049888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:39.560517 env[1321]: time="2025-09-06T00:13:39.560485688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 00:13:39.560995 env[1321]: time="2025-09-06T00:13:39.560972088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 00:13:40.047677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598939728.mount: Deactivated successfully. 
Sep 6 00:13:42.364556 env[1321]: time="2025-09-06T00:13:42.364505768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:42.367730 env[1321]: time="2025-09-06T00:13:42.367680528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:42.372719 env[1321]: time="2025-09-06T00:13:42.372662328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:42.375333 env[1321]: time="2025-09-06T00:13:42.375272848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:42.376424 env[1321]: time="2025-09-06T00:13:42.376368368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 6 00:13:47.140964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 00:13:47.141138 systemd[1]: Stopped kubelet.service. Sep 6 00:13:47.142615 systemd[1]: Starting kubelet.service... Sep 6 00:13:47.240600 systemd[1]: Started kubelet.service. 
Sep 6 00:13:47.292056 kubelet[1632]: E0906 00:13:47.292002 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:13:47.293815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:13:47.293954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:13:48.199291 systemd[1]: Stopped kubelet.service. Sep 6 00:13:48.201319 systemd[1]: Starting kubelet.service... Sep 6 00:13:48.224494 systemd[1]: Reloading. Sep 6 00:13:48.284677 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2025-09-06T00:13:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:13:48.285071 /usr/lib/systemd/system-generators/torcx-generator[1669]: time="2025-09-06T00:13:48Z" level=info msg="torcx already run" Sep 6 00:13:48.378810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:13:48.378831 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:13:48.393926 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:13:48.458099 systemd[1]: Started kubelet.service. Sep 6 00:13:48.459696 systemd[1]: Stopping kubelet.service... 
Sep 6 00:13:48.459986 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:13:48.460212 systemd[1]: Stopped kubelet.service. Sep 6 00:13:48.461849 systemd[1]: Starting kubelet.service... Sep 6 00:13:48.555639 systemd[1]: Started kubelet.service. Sep 6 00:13:48.590718 kubelet[1728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:13:48.591110 kubelet[1728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:13:48.591161 kubelet[1728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:13:48.591317 kubelet[1728]: I0906 00:13:48.591269 1728 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:13:49.009363 kubelet[1728]: I0906 00:13:49.009329 1728 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:13:49.009512 kubelet[1728]: I0906 00:13:49.009502 1728 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:13:49.009901 kubelet[1728]: I0906 00:13:49.009885 1728 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:13:49.032892 kubelet[1728]: I0906 00:13:49.032857 1728 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:13:49.036140 kubelet[1728]: E0906 00:13:49.035027 1728 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:49.044343 kubelet[1728]: E0906 00:13:49.044311 1728 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:13:49.044457 kubelet[1728]: I0906 00:13:49.044344 1728 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:13:49.049725 kubelet[1728]: I0906 00:13:49.049695 1728 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:13:49.050247 kubelet[1728]: I0906 00:13:49.050233 1728 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:13:49.050626 kubelet[1728]: I0906 00:13:49.050588 1728 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:13:49.050901 kubelet[1728]: I0906 00:13:49.050629 1728 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 6 00:13:49.051517 kubelet[1728]: I0906 00:13:49.051391 1728 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:13:49.051560 kubelet[1728]: I0906 00:13:49.051517 1728 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:13:49.052098 kubelet[1728]: I0906 00:13:49.052068 1728 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:13:49.062804 kubelet[1728]: I0906 00:13:49.062762 1728 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:13:49.062804 kubelet[1728]: I0906 00:13:49.062804 1728 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:13:49.062918 kubelet[1728]: I0906 00:13:49.062833 1728 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:13:49.062918 kubelet[1728]: I0906 00:13:49.062849 1728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:13:49.063575 kubelet[1728]: W0906 00:13:49.063504 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:49.063906 kubelet[1728]: E0906 00:13:49.063578 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:49.063906 kubelet[1728]: W0906 00:13:49.063879 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:49.063975 kubelet[1728]: E0906 
00:13:49.063930 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:49.069804 kubelet[1728]: I0906 00:13:49.069782 1728 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:13:49.072469 kubelet[1728]: I0906 00:13:49.072442 1728 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:13:49.072758 kubelet[1728]: W0906 00:13:49.072747 1728 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:13:49.074658 kubelet[1728]: I0906 00:13:49.074635 1728 server.go:1274] "Started kubelet" Sep 6 00:13:49.075324 kubelet[1728]: I0906 00:13:49.075286 1728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:13:49.075548 kubelet[1728]: I0906 00:13:49.075510 1728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:13:49.078675 kubelet[1728]: I0906 00:13:49.078646 1728 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:13:49.083880 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 00:13:49.086806 kubelet[1728]: I0906 00:13:49.086781 1728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:13:49.086981 kubelet[1728]: I0906 00:13:49.086960 1728 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:13:49.087942 kubelet[1728]: E0906 00:13:49.086305 1728 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862892c8d919848 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:13:49.074610248 +0000 UTC m=+0.515279361,LastTimestamp:2025-09-06 00:13:49.074610248 +0000 UTC m=+0.515279361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:13:49.088265 kubelet[1728]: I0906 00:13:49.088229 1728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:13:49.088832 kubelet[1728]: E0906 00:13:49.088799 1728 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:13:49.088890 kubelet[1728]: I0906 00:13:49.088844 1728 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:13:49.089029 kubelet[1728]: I0906 00:13:49.089007 1728 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:13:49.089081 kubelet[1728]: I0906 00:13:49.089068 1728 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:13:49.089271 kubelet[1728]: E0906 00:13:49.089237 1728 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Sep 6 00:13:49.089408 kubelet[1728]: W0906 00:13:49.089364 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:49.089455 kubelet[1728]: E0906 00:13:49.089419 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:49.089485 kubelet[1728]: E0906 00:13:49.089457 1728 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:13:49.089764 kubelet[1728]: I0906 00:13:49.089742 1728 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:13:49.090141 kubelet[1728]: I0906 00:13:49.090120 1728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:13:49.092432 kubelet[1728]: I0906 00:13:49.092415 1728 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:13:49.107124 kubelet[1728]: I0906 00:13:49.107051 1728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:13:49.108312 kubelet[1728]: I0906 00:13:49.108272 1728 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:13:49.108312 kubelet[1728]: I0906 00:13:49.108305 1728 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:13:49.108406 kubelet[1728]: I0906 00:13:49.108323 1728 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:13:49.108406 kubelet[1728]: E0906 00:13:49.108368 1728 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:13:49.109042 kubelet[1728]: W0906 00:13:49.108980 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:49.109130 kubelet[1728]: E0906 00:13:49.109053 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:49.111485 kubelet[1728]: I0906 00:13:49.111467 1728 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:13:49.111622 kubelet[1728]: I0906 00:13:49.111609 1728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:13:49.111699 kubelet[1728]: I0906 00:13:49.111689 1728 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:13:49.185091 kubelet[1728]: I0906 00:13:49.185062 1728 policy_none.go:49] "None policy: Start" Sep 6 00:13:49.186197 kubelet[1728]: I0906 00:13:49.186173 1728 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:13:49.186258 kubelet[1728]: I0906 00:13:49.186225 1728 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:13:49.189149 kubelet[1728]: E0906 00:13:49.189114 1728 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:13:49.192042 kubelet[1728]: I0906 00:13:49.192016 1728 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:13:49.192171 kubelet[1728]: I0906 00:13:49.192158 1728 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:13:49.192200 kubelet[1728]: I0906 00:13:49.192173 1728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:13:49.192729 kubelet[1728]: I0906 00:13:49.192713 1728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:13:49.193381 kubelet[1728]: E0906 00:13:49.193360 1728 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:13:49.290374 kubelet[1728]: I0906 00:13:49.290264 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:13:49.290530 kubelet[1728]: I0906 00:13:49.290512 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:49.290617 kubelet[1728]: I0906 00:13:49.290600 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " 
pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:49.290681 kubelet[1728]: I0906 00:13:49.290670 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:49.290779 kubelet[1728]: I0906 00:13:49.290766 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:49.290847 kubelet[1728]: I0906 00:13:49.290834 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:49.290915 kubelet[1728]: I0906 00:13:49.290901 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:49.290995 kubelet[1728]: I0906 00:13:49.290981 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:49.291084 kubelet[1728]: I0906 00:13:49.291071 1728 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:49.291168 kubelet[1728]: E0906 00:13:49.290261 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Sep 6 00:13:49.294025 kubelet[1728]: I0906 00:13:49.293205 1728 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:13:49.294025 kubelet[1728]: E0906 00:13:49.293635 1728 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 6 00:13:49.495499 kubelet[1728]: I0906 00:13:49.495470 1728 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:13:49.495979 kubelet[1728]: E0906 00:13:49.495953 1728 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 6 00:13:49.514253 kubelet[1728]: E0906 00:13:49.514217 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:49.514939 env[1321]: time="2025-09-06T00:13:49.514891808Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0771ea8e8dbcac7968bb9caad278ecfe,Namespace:kube-system,Attempt:0,}" Sep 6 00:13:49.515365 kubelet[1728]: E0906 00:13:49.515345 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:49.516231 env[1321]: time="2025-09-06T00:13:49.515705568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 6 00:13:49.518636 kubelet[1728]: E0906 00:13:49.518551 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:49.519351 env[1321]: time="2025-09-06T00:13:49.518916368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 6 00:13:49.692645 kubelet[1728]: E0906 00:13:49.692592 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Sep 6 00:13:49.897446 kubelet[1728]: I0906 00:13:49.897413 1728 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:13:49.898780 kubelet[1728]: E0906 00:13:49.898735 1728 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 6 00:13:50.019672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453489250.mount: Deactivated successfully. 
Sep 6 00:13:50.032509 env[1321]: time="2025-09-06T00:13:50.031409128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.041527 env[1321]: time="2025-09-06T00:13:50.041224168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.044428 env[1321]: time="2025-09-06T00:13:50.044381568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.047649 env[1321]: time="2025-09-06T00:13:50.047610408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.052145 env[1321]: time="2025-09-06T00:13:50.052093128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.053972 env[1321]: time="2025-09-06T00:13:50.053877648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.056197 env[1321]: time="2025-09-06T00:13:50.056163008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.062829 env[1321]: time="2025-09-06T00:13:50.062800368Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.063983 env[1321]: time="2025-09-06T00:13:50.063951808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.065538 env[1321]: time="2025-09-06T00:13:50.065415968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.069443 env[1321]: time="2025-09-06T00:13:50.069407288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.079835 env[1321]: time="2025-09-06T00:13:50.079797328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:13:50.097973 env[1321]: time="2025-09-06T00:13:50.097883888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:13:50.097973 env[1321]: time="2025-09-06T00:13:50.097948408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:13:50.098144 env[1321]: time="2025-09-06T00:13:50.098114208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:13:50.100565 env[1321]: time="2025-09-06T00:13:50.098421608Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36ef7e9045785b56b3ec3807b59af41239b0f431f2787302316a6c9947ebd1b8 pid=1772 runtime=io.containerd.runc.v2 Sep 6 00:13:50.107036 env[1321]: time="2025-09-06T00:13:50.106980848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:13:50.107148 env[1321]: time="2025-09-06T00:13:50.107047088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:13:50.107148 env[1321]: time="2025-09-06T00:13:50.107072448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:13:50.107317 env[1321]: time="2025-09-06T00:13:50.107287608Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bea901602d7888f93ec90470158b20f63162f7e56a86208f5b9a384fca1c7b8 pid=1790 runtime=io.containerd.runc.v2 Sep 6 00:13:50.117944 env[1321]: time="2025-09-06T00:13:50.117868808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:13:50.117944 env[1321]: time="2025-09-06T00:13:50.117912368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:13:50.117944 env[1321]: time="2025-09-06T00:13:50.117922648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:13:50.118990 kubelet[1728]: W0906 00:13:50.118935 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:50.119074 kubelet[1728]: E0906 00:13:50.119000 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:50.119115 env[1321]: time="2025-09-06T00:13:50.118508528Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7db54081fded72f6a0709c22e2a89544ab9ff74fdc0100285284d2623ea677fc pid=1823 runtime=io.containerd.runc.v2 Sep 6 00:13:50.138378 kubelet[1728]: W0906 00:13:50.138277 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:50.138571 kubelet[1728]: E0906 00:13:50.138533 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:50.172646 env[1321]: time="2025-09-06T00:13:50.172593248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"36ef7e9045785b56b3ec3807b59af41239b0f431f2787302316a6c9947ebd1b8\"" Sep 6 00:13:50.174091 kubelet[1728]: E0906 00:13:50.173896 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:50.176154 env[1321]: time="2025-09-06T00:13:50.176119128Z" level=info msg="CreateContainer within sandbox \"36ef7e9045785b56b3ec3807b59af41239b0f431f2787302316a6c9947ebd1b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:13:50.181466 env[1321]: time="2025-09-06T00:13:50.181432048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0771ea8e8dbcac7968bb9caad278ecfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"7db54081fded72f6a0709c22e2a89544ab9ff74fdc0100285284d2623ea677fc\"" Sep 6 00:13:50.182194 kubelet[1728]: E0906 00:13:50.182169 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:50.183551 env[1321]: time="2025-09-06T00:13:50.183515928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bea901602d7888f93ec90470158b20f63162f7e56a86208f5b9a384fca1c7b8\"" Sep 6 00:13:50.183855 env[1321]: time="2025-09-06T00:13:50.183802048Z" level=info msg="CreateContainer within sandbox \"7db54081fded72f6a0709c22e2a89544ab9ff74fdc0100285284d2623ea677fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:13:50.184561 kubelet[1728]: E0906 00:13:50.184425 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:50.186141 env[1321]: time="2025-09-06T00:13:50.186106048Z" 
level=info msg="CreateContainer within sandbox \"6bea901602d7888f93ec90470158b20f63162f7e56a86208f5b9a384fca1c7b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:13:50.195632 env[1321]: time="2025-09-06T00:13:50.195597728Z" level=info msg="CreateContainer within sandbox \"36ef7e9045785b56b3ec3807b59af41239b0f431f2787302316a6c9947ebd1b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f7ee79e9dfa479dac0f2e95db9fce6f2a3ceb7fe7802f807b1c1640f753d45d\"" Sep 6 00:13:50.196422 env[1321]: time="2025-09-06T00:13:50.196392648Z" level=info msg="StartContainer for \"3f7ee79e9dfa479dac0f2e95db9fce6f2a3ceb7fe7802f807b1c1640f753d45d\"" Sep 6 00:13:50.217906 kubelet[1728]: W0906 00:13:50.216890 1728 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 6 00:13:50.217906 kubelet[1728]: E0906 00:13:50.216962 1728 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 6 00:13:50.221757 env[1321]: time="2025-09-06T00:13:50.221674088Z" level=info msg="CreateContainer within sandbox \"7db54081fded72f6a0709c22e2a89544ab9ff74fdc0100285284d2623ea677fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"44943bfa7e36eeb40ce6eb27da577b10994bc1efbc89e312652cf57bda01498e\"" Sep 6 00:13:50.222595 env[1321]: time="2025-09-06T00:13:50.222571888Z" level=info msg="StartContainer for \"44943bfa7e36eeb40ce6eb27da577b10994bc1efbc89e312652cf57bda01498e\"" Sep 6 00:13:50.223823 env[1321]: 
time="2025-09-06T00:13:50.223786968Z" level=info msg="CreateContainer within sandbox \"6bea901602d7888f93ec90470158b20f63162f7e56a86208f5b9a384fca1c7b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b2dad771243de7a3af6450d0a0b83fdd58ba3ebcc82896603663231a87d160ad\"" Sep 6 00:13:50.224164 env[1321]: time="2025-09-06T00:13:50.224130568Z" level=info msg="StartContainer for \"b2dad771243de7a3af6450d0a0b83fdd58ba3ebcc82896603663231a87d160ad\"" Sep 6 00:13:50.262019 env[1321]: time="2025-09-06T00:13:50.261973848Z" level=info msg="StartContainer for \"3f7ee79e9dfa479dac0f2e95db9fce6f2a3ceb7fe7802f807b1c1640f753d45d\" returns successfully" Sep 6 00:13:50.280295 env[1321]: time="2025-09-06T00:13:50.280147448Z" level=info msg="StartContainer for \"44943bfa7e36eeb40ce6eb27da577b10994bc1efbc89e312652cf57bda01498e\" returns successfully" Sep 6 00:13:50.300672 env[1321]: time="2025-09-06T00:13:50.300631448Z" level=info msg="StartContainer for \"b2dad771243de7a3af6450d0a0b83fdd58ba3ebcc82896603663231a87d160ad\" returns successfully" Sep 6 00:13:50.701273 kubelet[1728]: I0906 00:13:50.701234 1728 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:13:51.114832 kubelet[1728]: E0906 00:13:51.114730 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:51.117590 kubelet[1728]: E0906 00:13:51.117518 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:51.119163 kubelet[1728]: E0906 00:13:51.119077 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:51.635943 kubelet[1728]: E0906 00:13:51.635890 1728 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:13:51.836929 kubelet[1728]: I0906 00:13:51.836889 1728 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:13:52.065756 kubelet[1728]: I0906 00:13:52.065709 1728 apiserver.go:52] "Watching apiserver" Sep 6 00:13:52.089674 kubelet[1728]: I0906 00:13:52.089627 1728 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:13:52.125513 kubelet[1728]: E0906 00:13:52.125465 1728 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:52.125642 kubelet[1728]: E0906 00:13:52.125626 1728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:53.996240 systemd[1]: Reloading. Sep 6 00:13:54.040009 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2025-09-06T00:13:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:13:54.040038 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2025-09-06T00:13:54Z" level=info msg="torcx already run" Sep 6 00:13:54.105616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:13:54.105636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 6 00:13:54.123336 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:13:54.191631 kubelet[1728]: I0906 00:13:54.191598 1728 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:13:54.191906 systemd[1]: Stopping kubelet.service... Sep 6 00:13:54.214203 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:13:54.214504 systemd[1]: Stopped kubelet.service. Sep 6 00:13:54.216293 systemd[1]: Starting kubelet.service... Sep 6 00:13:54.317389 systemd[1]: Started kubelet.service. Sep 6 00:13:54.352755 kubelet[2078]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:13:54.352755 kubelet[2078]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 00:13:54.352755 kubelet[2078]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:13:54.353085 kubelet[2078]: I0906 00:13:54.352816 2078 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:13:54.358645 kubelet[2078]: I0906 00:13:54.358605 2078 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:13:54.358645 kubelet[2078]: I0906 00:13:54.358632 2078 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:13:54.359273 kubelet[2078]: I0906 00:13:54.359245 2078 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:13:54.360579 kubelet[2078]: I0906 00:13:54.360562 2078 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 00:13:54.362435 kubelet[2078]: I0906 00:13:54.362415 2078 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:13:54.365683 kubelet[2078]: E0906 00:13:54.365650 2078 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:13:54.365683 kubelet[2078]: I0906 00:13:54.365682 2078 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:13:54.368001 kubelet[2078]: I0906 00:13:54.367983 2078 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:13:54.368426 kubelet[2078]: I0906 00:13:54.368409 2078 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:13:54.368630 kubelet[2078]: I0906 00:13:54.368598 2078 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:13:54.368853 kubelet[2078]: I0906 00:13:54.368684 2078 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpt
ions":null,"CgroupVersion":1} Sep 6 00:13:54.368975 kubelet[2078]: I0906 00:13:54.368962 2078 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:13:54.369026 kubelet[2078]: I0906 00:13:54.369019 2078 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:13:54.369115 kubelet[2078]: I0906 00:13:54.369105 2078 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:13:54.369278 kubelet[2078]: I0906 00:13:54.369264 2078 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:13:54.369354 kubelet[2078]: I0906 00:13:54.369343 2078 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:13:54.369415 kubelet[2078]: I0906 00:13:54.369406 2078 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:13:54.369470 kubelet[2078]: I0906 00:13:54.369462 2078 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:13:54.370404 kubelet[2078]: I0906 00:13:54.370386 2078 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:13:54.370988 kubelet[2078]: I0906 00:13:54.370968 2078 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:13:54.372537 kubelet[2078]: I0906 00:13:54.372517 2078 server.go:1274] "Started kubelet" Sep 6 00:13:54.372944 kubelet[2078]: I0906 00:13:54.372914 2078 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:13:54.373854 kubelet[2078]: I0906 00:13:54.373823 2078 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:13:54.374101 kubelet[2078]: I0906 00:13:54.374085 2078 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:13:54.375221 kubelet[2078]: I0906 00:13:54.375177 2078 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:13:54.375472 kubelet[2078]: I0906 00:13:54.375457 2078 server.go:236] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:13:54.381442 kubelet[2078]: I0906 00:13:54.380666 2078 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:13:54.381607 kubelet[2078]: I0906 00:13:54.381588 2078 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:13:54.382767 kubelet[2078]: E0906 00:13:54.382098 2078 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:13:54.382767 kubelet[2078]: I0906 00:13:54.382729 2078 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:13:54.382958 kubelet[2078]: I0906 00:13:54.382875 2078 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:13:54.385610 kubelet[2078]: I0906 00:13:54.385581 2078 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:13:54.385841 kubelet[2078]: I0906 00:13:54.385819 2078 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:13:54.400229 kubelet[2078]: I0906 00:13:54.400206 2078 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:13:54.405544 kubelet[2078]: I0906 00:13:54.405452 2078 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:13:54.407033 kubelet[2078]: I0906 00:13:54.406509 2078 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:13:54.407033 kubelet[2078]: I0906 00:13:54.406528 2078 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:13:54.407033 kubelet[2078]: I0906 00:13:54.406550 2078 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:13:54.407033 kubelet[2078]: E0906 00:13:54.406591 2078 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:13:54.414173 kubelet[2078]: E0906 00:13:54.414151 2078 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:13:54.444388 kubelet[2078]: I0906 00:13:54.444361 2078 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:13:54.444388 kubelet[2078]: I0906 00:13:54.444382 2078 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:13:54.444540 kubelet[2078]: I0906 00:13:54.444404 2078 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:13:54.444566 kubelet[2078]: I0906 00:13:54.444540 2078 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 00:13:54.444566 kubelet[2078]: I0906 00:13:54.444551 2078 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 00:13:54.444611 kubelet[2078]: I0906 00:13:54.444569 2078 policy_none.go:49] "None policy: Start" Sep 6 00:13:54.445151 kubelet[2078]: I0906 00:13:54.445136 2078 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:13:54.445223 kubelet[2078]: I0906 00:13:54.445159 2078 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:13:54.445343 kubelet[2078]: I0906 00:13:54.445310 2078 state_mem.go:75] "Updated machine memory state" Sep 6 00:13:54.446487 kubelet[2078]: I0906 00:13:54.446456 2078 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:13:54.446646 kubelet[2078]: 
I0906 00:13:54.446633 2078 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:13:54.446676 kubelet[2078]: I0906 00:13:54.446650 2078 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:13:54.448138 kubelet[2078]: I0906 00:13:54.448115 2078 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:13:54.550710 kubelet[2078]: I0906 00:13:54.550670 2078 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 6 00:13:54.557791 kubelet[2078]: I0906 00:13:54.557764 2078 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 6 00:13:54.557965 kubelet[2078]: I0906 00:13:54.557952 2078 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 6 00:13:54.584794 kubelet[2078]: I0906 00:13:54.583787 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:54.584794 kubelet[2078]: I0906 00:13:54.583824 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:54.584794 kubelet[2078]: I0906 00:13:54.583846 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:54.584794 kubelet[2078]: I0906 00:13:54.583896 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:54.584794 kubelet[2078]: I0906 00:13:54.583932 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 6 00:13:54.584982 kubelet[2078]: I0906 00:13:54.583962 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:54.584982 kubelet[2078]: I0906 00:13:54.583981 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0771ea8e8dbcac7968bb9caad278ecfe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0771ea8e8dbcac7968bb9caad278ecfe\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:54.584982 kubelet[2078]: I0906 00:13:54.584030 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:54.584982 kubelet[2078]: I0906 00:13:54.584047 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:13:54.815143 kubelet[2078]: E0906 00:13:54.814892 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:54.815143 kubelet[2078]: E0906 00:13:54.814948 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:54.815143 kubelet[2078]: E0906 00:13:54.815073 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:54.992313 sudo[2113]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 00:13:54.992534 sudo[2113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 00:13:55.370478 kubelet[2078]: I0906 00:13:55.370378 2078 apiserver.go:52] "Watching apiserver" Sep 6 00:13:55.383730 kubelet[2078]: I0906 00:13:55.383700 2078 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:13:55.423490 kubelet[2078]: E0906 00:13:55.423451 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:55.424393 kubelet[2078]: E0906 00:13:55.424353 2078 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:55.433174 kubelet[2078]: E0906 00:13:55.433141 2078 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 6 00:13:55.433464 kubelet[2078]: E0906 00:13:55.433445 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:55.452290 sudo[2113]: pam_unix(sudo:session): session closed for user root Sep 6 00:13:55.468920 kubelet[2078]: I0906 00:13:55.468862 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.468844288 podStartE2EDuration="1.468844288s" podCreationTimestamp="2025-09-06 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:13:55.466771808 +0000 UTC m=+1.145436961" watchObservedRunningTime="2025-09-06 00:13:55.468844288 +0000 UTC m=+1.147509441" Sep 6 00:13:55.469063 kubelet[2078]: I0906 00:13:55.468973 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.468969008 podStartE2EDuration="1.468969008s" podCreationTimestamp="2025-09-06 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:13:55.458244928 +0000 UTC m=+1.136910081" watchObservedRunningTime="2025-09-06 00:13:55.468969008 +0000 UTC m=+1.147634161" Sep 6 00:13:55.482189 kubelet[2078]: I0906 00:13:55.482107 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.4820921679999999 podStartE2EDuration="1.482092168s" podCreationTimestamp="2025-09-06 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:13:55.475057408 +0000 UTC m=+1.153722641" watchObservedRunningTime="2025-09-06 00:13:55.482092168 +0000 UTC m=+1.160757281" Sep 6 00:13:56.425454 kubelet[2078]: E0906 00:13:56.425422 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:56.966374 sudo[1444]: pam_unix(sudo:session): session closed for user root Sep 6 00:13:56.967781 sshd[1439]: pam_unix(sshd:session): session closed for user core Sep 6 00:13:56.970255 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:59616.service: Deactivated successfully. Sep 6 00:13:56.971183 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:13:56.971229 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:13:56.971932 systemd-logind[1305]: Removed session 5. Sep 6 00:13:57.430708 kubelet[2078]: E0906 00:13:57.430664 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:13:59.380461 kubelet[2078]: I0906 00:13:59.380430 2078 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 00:13:59.380844 env[1321]: time="2025-09-06T00:13:59.380720624Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:13:59.381025 kubelet[2078]: I0906 00:13:59.380924 2078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 00:14:00.425191 kubelet[2078]: I0906 00:14:00.425143 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-run\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425191 kubelet[2078]: I0906 00:14:00.425190 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-kube-api-access-7z9r7\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425572 kubelet[2078]: I0906 00:14:00.425221 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f-lib-modules\") pod \"kube-proxy-9z88s\" (UID: \"69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f\") " pod="kube-system/kube-proxy-9z88s" Sep 6 00:14:00.425572 kubelet[2078]: I0906 00:14:00.425409 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-net\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425572 kubelet[2078]: I0906 00:14:00.425435 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-lib-modules\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") 
" pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425572 kubelet[2078]: I0906 00:14:00.425453 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-clustermesh-secrets\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425572 kubelet[2078]: I0906 00:14:00.425478 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47gz\" (UniqueName: \"kubernetes.io/projected/69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f-kube-api-access-q47gz\") pod \"kube-proxy-9z88s\" (UID: \"69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f\") " pod="kube-system/kube-proxy-9z88s" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425496 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-bpf-maps\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425511 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hubble-tls\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425529 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-cgroup\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425551 2078 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-etc-cni-netd\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425571 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f-kube-proxy\") pod \"kube-proxy-9z88s\" (UID: \"69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f\") " pod="kube-system/kube-proxy-9z88s" Sep 6 00:14:00.425707 kubelet[2078]: I0906 00:14:00.425587 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cni-path\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.426027 kubelet[2078]: I0906 00:14:00.425603 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-kernel\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.426027 kubelet[2078]: I0906 00:14:00.425618 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-xtables-lock\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.426027 kubelet[2078]: I0906 00:14:00.425644 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-config-path\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.426027 kubelet[2078]: I0906 00:14:00.425661 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f-xtables-lock\") pod \"kube-proxy-9z88s\" (UID: \"69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f\") " pod="kube-system/kube-proxy-9z88s" Sep 6 00:14:00.426027 kubelet[2078]: I0906 00:14:00.425675 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hostproc\") pod \"cilium-sx6cw\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") " pod="kube-system/cilium-sx6cw" Sep 6 00:14:00.526758 kubelet[2078]: I0906 00:14:00.526707 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6cf33f3-319b-467e-9beb-91a9351f56bd-cilium-config-path\") pod \"cilium-operator-5d85765b45-d9g87\" (UID: \"b6cf33f3-319b-467e-9beb-91a9351f56bd\") " pod="kube-system/cilium-operator-5d85765b45-d9g87" Sep 6 00:14:00.526877 kubelet[2078]: I0906 00:14:00.526835 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/b6cf33f3-319b-467e-9beb-91a9351f56bd-kube-api-access-qp8ps\") pod \"cilium-operator-5d85765b45-d9g87\" (UID: \"b6cf33f3-319b-467e-9beb-91a9351f56bd\") " pod="kube-system/cilium-operator-5d85765b45-d9g87" Sep 6 00:14:00.527077 kubelet[2078]: I0906 00:14:00.527043 2078 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:14:00.612541 kubelet[2078]: E0906 00:14:00.612509 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.666399 kubelet[2078]: E0906 00:14:00.666339 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.667412 env[1321]: time="2025-09-06T00:14:00.667010650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z88s,Uid:69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f,Namespace:kube-system,Attempt:0,}" Sep 6 00:14:00.679717 kubelet[2078]: E0906 00:14:00.679620 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.680771 env[1321]: time="2025-09-06T00:14:00.680161233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sx6cw,Uid:92bb802e-3560-46ec-9f62-7cd2b4c2aba4,Namespace:kube-system,Attempt:0,}" Sep 6 00:14:00.694288 env[1321]: time="2025-09-06T00:14:00.694196380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:14:00.694288 env[1321]: time="2025-09-06T00:14:00.694248181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:14:00.694288 env[1321]: time="2025-09-06T00:14:00.694261381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:14:00.694663 env[1321]: time="2025-09-06T00:14:00.694613102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e83fb219cb9cf0c05828b32ab240b1637b69c2f6d8e63f34157db4166ff363d pid=2174 runtime=io.containerd.runc.v2 Sep 6 00:14:00.703829 env[1321]: time="2025-09-06T00:14:00.703758626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:14:00.703829 env[1321]: time="2025-09-06T00:14:00.703796826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:14:00.703829 env[1321]: time="2025-09-06T00:14:00.703807386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:14:00.704187 env[1321]: time="2025-09-06T00:14:00.704124508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9 pid=2197 runtime=io.containerd.runc.v2 Sep 6 00:14:00.746840 env[1321]: time="2025-09-06T00:14:00.746375470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sx6cw,Uid:92bb802e-3560-46ec-9f62-7cd2b4c2aba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\"" Sep 6 00:14:00.747690 kubelet[2078]: E0906 00:14:00.747086 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.748870 env[1321]: time="2025-09-06T00:14:00.748834481Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 
00:14:00.752925 env[1321]: time="2025-09-06T00:14:00.752859821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z88s,Uid:69ddaf8d-3865-45a8-81a5-5d3d8fc95f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e83fb219cb9cf0c05828b32ab240b1637b69c2f6d8e63f34157db4166ff363d\"" Sep 6 00:14:00.753817 kubelet[2078]: E0906 00:14:00.753722 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.756163 env[1321]: time="2025-09-06T00:14:00.756133116Z" level=info msg="CreateContainer within sandbox \"4e83fb219cb9cf0c05828b32ab240b1637b69c2f6d8e63f34157db4166ff363d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:14:00.762085 kubelet[2078]: E0906 00:14:00.762061 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:00.762663 env[1321]: time="2025-09-06T00:14:00.762622427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d9g87,Uid:b6cf33f3-319b-467e-9beb-91a9351f56bd,Namespace:kube-system,Attempt:0,}" Sep 6 00:14:00.789717 env[1321]: time="2025-09-06T00:14:00.789671837Z" level=info msg="CreateContainer within sandbox \"4e83fb219cb9cf0c05828b32ab240b1637b69c2f6d8e63f34157db4166ff363d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4314e71441dc5b0a34005830854f03ef0f013cc1abec8dc056c1065703883c7\"" Sep 6 00:14:00.791635 env[1321]: time="2025-09-06T00:14:00.791600646Z" level=info msg="StartContainer for \"f4314e71441dc5b0a34005830854f03ef0f013cc1abec8dc056c1065703883c7\"" Sep 6 00:14:00.793176 env[1321]: time="2025-09-06T00:14:00.793046853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:14:00.793176 env[1321]: time="2025-09-06T00:14:00.793083853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:14:00.793176 env[1321]: time="2025-09-06T00:14:00.793094613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:14:00.793321 env[1321]: time="2025-09-06T00:14:00.793270934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c pid=2255 runtime=io.containerd.runc.v2 Sep 6 00:14:00.846468 env[1321]: time="2025-09-06T00:14:00.845662864Z" level=info msg="StartContainer for \"f4314e71441dc5b0a34005830854f03ef0f013cc1abec8dc056c1065703883c7\" returns successfully" Sep 6 00:14:00.848823 env[1321]: time="2025-09-06T00:14:00.848785359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d9g87,Uid:b6cf33f3-319b-467e-9beb-91a9351f56bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\"" Sep 6 00:14:00.849794 kubelet[2078]: E0906 00:14:00.849637 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:01.445654 kubelet[2078]: E0906 00:14:01.445435 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:01.446125 kubelet[2078]: E0906 00:14:01.445622 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 
00:14:01.467553 kubelet[2078]: I0906 00:14:01.467379 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9z88s" podStartSLOduration=1.467359777 podStartE2EDuration="1.467359777s" podCreationTimestamp="2025-09-06 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:14:01.467159016 +0000 UTC m=+7.145824169" watchObservedRunningTime="2025-09-06 00:14:01.467359777 +0000 UTC m=+7.146024930" Sep 6 00:14:04.756112 kubelet[2078]: E0906 00:14:04.755926 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:05.017942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2141791256.mount: Deactivated successfully. Sep 6 00:14:05.285795 kubelet[2078]: E0906 00:14:05.285663 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:05.453017 kubelet[2078]: E0906 00:14:05.452952 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:05.453197 kubelet[2078]: E0906 00:14:05.453160 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:07.510228 env[1321]: time="2025-09-06T00:14:07.510154538Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:07.512915 env[1321]: time="2025-09-06T00:14:07.512885626Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:07.514511 env[1321]: time="2025-09-06T00:14:07.514487071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:07.515176 env[1321]: time="2025-09-06T00:14:07.515144793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:14:07.517572 env[1321]: time="2025-09-06T00:14:07.517544720Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:14:07.520817 env[1321]: time="2025-09-06T00:14:07.520785170Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:14:07.533570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367692873.mount: Deactivated successfully. 
Sep 6 00:14:07.535580 env[1321]: time="2025-09-06T00:14:07.535547575Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\"" Sep 6 00:14:07.536936 env[1321]: time="2025-09-06T00:14:07.536907859Z" level=info msg="StartContainer for \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\"" Sep 6 00:14:07.617082 env[1321]: time="2025-09-06T00:14:07.617035223Z" level=info msg="StartContainer for \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\" returns successfully" Sep 6 00:14:07.641900 env[1321]: time="2025-09-06T00:14:07.641857418Z" level=info msg="shim disconnected" id=3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61 Sep 6 00:14:07.642101 env[1321]: time="2025-09-06T00:14:07.642082259Z" level=warning msg="cleaning up after shim disconnected" id=3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61 namespace=k8s.io Sep 6 00:14:07.642157 env[1321]: time="2025-09-06T00:14:07.642142459Z" level=info msg="cleaning up dead shim" Sep 6 00:14:07.648488 env[1321]: time="2025-09-06T00:14:07.648445998Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n" Sep 6 00:14:08.460714 kubelet[2078]: E0906 00:14:08.460624 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:08.462513 env[1321]: time="2025-09-06T00:14:08.462474588Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:14:08.491967 env[1321]: 
time="2025-09-06T00:14:08.491145069Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\"" Sep 6 00:14:08.492299 env[1321]: time="2025-09-06T00:14:08.492269593Z" level=info msg="StartContainer for \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\"" Sep 6 00:14:08.532974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61-rootfs.mount: Deactivated successfully. Sep 6 00:14:08.544615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705468719.mount: Deactivated successfully. Sep 6 00:14:08.563582 env[1321]: time="2025-09-06T00:14:08.563544156Z" level=info msg="StartContainer for \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\" returns successfully" Sep 6 00:14:08.569606 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:14:08.569862 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:14:08.570094 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:14:08.571830 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:14:08.574060 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:14:08.589025 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:14:08.619370 env[1321]: time="2025-09-06T00:14:08.619324435Z" level=info msg="shim disconnected" id=1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f Sep 6 00:14:08.619370 env[1321]: time="2025-09-06T00:14:08.619368195Z" level=warning msg="cleaning up after shim disconnected" id=1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f namespace=k8s.io Sep 6 00:14:08.619370 env[1321]: time="2025-09-06T00:14:08.619376955Z" level=info msg="cleaning up dead shim" Sep 6 00:14:08.634398 env[1321]: time="2025-09-06T00:14:08.634354118Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" Sep 6 00:14:09.128587 env[1321]: time="2025-09-06T00:14:09.128531705Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:09.129857 env[1321]: time="2025-09-06T00:14:09.129820028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:09.131808 env[1321]: time="2025-09-06T00:14:09.131784353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:14:09.132361 env[1321]: time="2025-09-06T00:14:09.132315995Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:14:09.134384 env[1321]: 
time="2025-09-06T00:14:09.134353600Z" level=info msg="CreateContainer within sandbox \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:14:09.144390 env[1321]: time="2025-09-06T00:14:09.144353387Z" level=info msg="CreateContainer within sandbox \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\"" Sep 6 00:14:09.145014 env[1321]: time="2025-09-06T00:14:09.144873068Z" level=info msg="StartContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\"" Sep 6 00:14:09.244065 env[1321]: time="2025-09-06T00:14:09.244022814Z" level=info msg="StartContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" returns successfully" Sep 6 00:14:09.463206 kubelet[2078]: E0906 00:14:09.463170 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:09.465599 kubelet[2078]: E0906 00:14:09.465571 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:09.467808 env[1321]: time="2025-09-06T00:14:09.467759692Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:14:09.493663 env[1321]: time="2025-09-06T00:14:09.493605281Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\"" Sep 6 
00:14:09.494090 env[1321]: time="2025-09-06T00:14:09.494058962Z" level=info msg="StartContainer for \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\"" Sep 6 00:14:09.514774 kubelet[2078]: I0906 00:14:09.514702 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d9g87" podStartSLOduration=1.231633706 podStartE2EDuration="9.514685217s" podCreationTimestamp="2025-09-06 00:14:00 +0000 UTC" firstStartedPulling="2025-09-06 00:14:00.850183886 +0000 UTC m=+6.528849039" lastFinishedPulling="2025-09-06 00:14:09.133235397 +0000 UTC m=+14.811900550" observedRunningTime="2025-09-06 00:14:09.482965572 +0000 UTC m=+15.161630765" watchObservedRunningTime="2025-09-06 00:14:09.514685217 +0000 UTC m=+15.193350370" Sep 6 00:14:09.531089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f-rootfs.mount: Deactivated successfully. Sep 6 00:14:09.552925 env[1321]: time="2025-09-06T00:14:09.552786799Z" level=info msg="StartContainer for \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\" returns successfully" Sep 6 00:14:09.593318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02-rootfs.mount: Deactivated successfully. 
Sep 6 00:14:09.597429 env[1321]: time="2025-09-06T00:14:09.597383518Z" level=info msg="shim disconnected" id=19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02 Sep 6 00:14:09.597429 env[1321]: time="2025-09-06T00:14:09.597424399Z" level=warning msg="cleaning up after shim disconnected" id=19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02 namespace=k8s.io Sep 6 00:14:09.597429 env[1321]: time="2025-09-06T00:14:09.597433479Z" level=info msg="cleaning up dead shim" Sep 6 00:14:09.603969 env[1321]: time="2025-09-06T00:14:09.603922536Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2668 runtime=io.containerd.runc.v2\n" Sep 6 00:14:10.468869 kubelet[2078]: E0906 00:14:10.468836 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:10.469310 kubelet[2078]: E0906 00:14:10.469291 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:10.471035 env[1321]: time="2025-09-06T00:14:10.470991176Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:14:10.484203 env[1321]: time="2025-09-06T00:14:10.484152569Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\"" Sep 6 00:14:10.484788 env[1321]: time="2025-09-06T00:14:10.484759091Z" level=info msg="StartContainer for \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\"" Sep 6 00:14:10.536031 
env[1321]: time="2025-09-06T00:14:10.535985899Z" level=info msg="StartContainer for \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\" returns successfully" Sep 6 00:14:10.549580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a-rootfs.mount: Deactivated successfully. Sep 6 00:14:10.552410 env[1321]: time="2025-09-06T00:14:10.552368380Z" level=info msg="shim disconnected" id=34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a Sep 6 00:14:10.552511 env[1321]: time="2025-09-06T00:14:10.552411780Z" level=warning msg="cleaning up after shim disconnected" id=34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a namespace=k8s.io Sep 6 00:14:10.552511 env[1321]: time="2025-09-06T00:14:10.552422180Z" level=info msg="cleaning up dead shim" Sep 6 00:14:10.558807 update_engine[1313]: I0906 00:14:10.558775 1313 update_attempter.cc:509] Updating boot flags... Sep 6 00:14:10.559277 env[1321]: time="2025-09-06T00:14:10.558781756Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:14:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n" Sep 6 00:14:11.473188 kubelet[2078]: E0906 00:14:11.473155 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:11.475326 env[1321]: time="2025-09-06T00:14:11.475139499Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:14:11.495757 env[1321]: time="2025-09-06T00:14:11.495583147Z" level=info msg="CreateContainer within sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\"" Sep 6 00:14:11.497376 env[1321]: time="2025-09-06T00:14:11.497340551Z" level=info msg="StartContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\"" Sep 6 00:14:11.551806 env[1321]: time="2025-09-06T00:14:11.550826317Z" level=info msg="StartContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" returns successfully" Sep 6 00:14:11.636868 kubelet[2078]: I0906 00:14:11.636831 2078 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:14:11.702779 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:14:11.711155 kubelet[2078]: I0906 00:14:11.711086 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gr4\" (UniqueName: \"kubernetes.io/projected/eb2ead0a-e17b-4505-b3ac-1f4fce3667cc-kube-api-access-f9gr4\") pod \"coredns-7c65d6cfc9-nznrb\" (UID: \"eb2ead0a-e17b-4505-b3ac-1f4fce3667cc\") " pod="kube-system/coredns-7c65d6cfc9-nznrb" Sep 6 00:14:11.711353 kubelet[2078]: I0906 00:14:11.711337 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gn9\" (UniqueName: \"kubernetes.io/projected/02dcf552-cbc2-4557-87ba-68403470ccf5-kube-api-access-24gn9\") pod \"coredns-7c65d6cfc9-q7sbt\" (UID: \"02dcf552-cbc2-4557-87ba-68403470ccf5\") " pod="kube-system/coredns-7c65d6cfc9-q7sbt" Sep 6 00:14:11.711483 kubelet[2078]: I0906 00:14:11.711465 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb2ead0a-e17b-4505-b3ac-1f4fce3667cc-config-volume\") pod \"coredns-7c65d6cfc9-nznrb\" (UID: \"eb2ead0a-e17b-4505-b3ac-1f4fce3667cc\") " pod="kube-system/coredns-7c65d6cfc9-nznrb" Sep 6 00:14:11.711592 kubelet[2078]: I0906 00:14:11.711577 2078 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02dcf552-cbc2-4557-87ba-68403470ccf5-config-volume\") pod \"coredns-7c65d6cfc9-q7sbt\" (UID: \"02dcf552-cbc2-4557-87ba-68403470ccf5\") " pod="kube-system/coredns-7c65d6cfc9-q7sbt" Sep 6 00:14:11.928773 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:14:11.972185 kubelet[2078]: E0906 00:14:11.972143 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:11.972573 kubelet[2078]: E0906 00:14:11.972554 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:11.974153 env[1321]: time="2025-09-06T00:14:11.974042632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7sbt,Uid:02dcf552-cbc2-4557-87ba-68403470ccf5,Namespace:kube-system,Attempt:0,}" Sep 6 00:14:11.974629 env[1321]: time="2025-09-06T00:14:11.974601353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nznrb,Uid:eb2ead0a-e17b-4505-b3ac-1f4fce3667cc,Namespace:kube-system,Attempt:0,}" Sep 6 00:14:12.477439 kubelet[2078]: E0906 00:14:12.477405 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:13.480733 kubelet[2078]: E0906 00:14:13.479459 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:13.557839 systemd-networkd[1099]: cilium_host: Link UP Sep 6 00:14:13.557995 systemd-networkd[1099]: cilium_net: Link UP Sep 6 
00:14:13.557997 systemd-networkd[1099]: cilium_net: Gained carrier Sep 6 00:14:13.558119 systemd-networkd[1099]: cilium_host: Gained carrier Sep 6 00:14:13.561545 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:14:13.560018 systemd-networkd[1099]: cilium_host: Gained IPv6LL Sep 6 00:14:13.646881 systemd-networkd[1099]: cilium_vxlan: Link UP Sep 6 00:14:13.646888 systemd-networkd[1099]: cilium_vxlan: Gained carrier Sep 6 00:14:13.839897 systemd-networkd[1099]: cilium_net: Gained IPv6LL Sep 6 00:14:13.909763 kernel: NET: Registered PF_ALG protocol family Sep 6 00:14:14.481424 kubelet[2078]: E0906 00:14:14.481394 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:14.512620 systemd-networkd[1099]: lxc_health: Link UP Sep 6 00:14:14.521591 systemd-networkd[1099]: lxc_health: Gained carrier Sep 6 00:14:14.521765 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:14:14.700906 kubelet[2078]: I0906 00:14:14.700852 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sx6cw" podStartSLOduration=7.931644078 podStartE2EDuration="14.70083616s" podCreationTimestamp="2025-09-06 00:14:00 +0000 UTC" firstStartedPulling="2025-09-06 00:14:00.748206718 +0000 UTC m=+6.426871831" lastFinishedPulling="2025-09-06 00:14:07.51739876 +0000 UTC m=+13.196063913" observedRunningTime="2025-09-06 00:14:12.495945386 +0000 UTC m=+18.174610539" watchObservedRunningTime="2025-09-06 00:14:14.70083616 +0000 UTC m=+20.379501313" Sep 6 00:14:14.815905 systemd-networkd[1099]: cilium_vxlan: Gained IPv6LL Sep 6 00:14:15.022153 systemd-networkd[1099]: lxc2fd3b2f76bd9: Link UP Sep 6 00:14:15.039496 systemd-networkd[1099]: lxcea58575cdb38: Link UP Sep 6 00:14:15.051769 kernel: eth0: renamed from tmp8a3f9 Sep 6 00:14:15.059880 kernel: eth0: renamed from tmpad28b Sep 6 
00:14:15.068876 systemd-networkd[1099]: lxcea58575cdb38: Gained carrier Sep 6 00:14:15.069671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:14:15.069719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcea58575cdb38: link becomes ready Sep 6 00:14:15.069751 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2fd3b2f76bd9: link becomes ready Sep 6 00:14:15.069802 systemd-networkd[1099]: lxc2fd3b2f76bd9: Gained carrier Sep 6 00:14:15.483205 kubelet[2078]: E0906 00:14:15.483164 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:15.711929 systemd-networkd[1099]: lxc_health: Gained IPv6LL Sep 6 00:14:16.484982 kubelet[2078]: E0906 00:14:16.484953 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:16.607927 systemd-networkd[1099]: lxc2fd3b2f76bd9: Gained IPv6LL Sep 6 00:14:16.799881 systemd-networkd[1099]: lxcea58575cdb38: Gained IPv6LL Sep 6 00:14:17.486718 kubelet[2078]: E0906 00:14:17.486689 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:18.788485 env[1321]: time="2025-09-06T00:14:18.788408833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:14:18.788485 env[1321]: time="2025-09-06T00:14:18.788456033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:14:18.788950 env[1321]: time="2025-09-06T00:14:18.788910673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:14:18.789216 env[1321]: time="2025-09-06T00:14:18.789173274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad28bcc4f2091f907f4f407a237a38ca3dc22762940367865ff1ccf551860314 pid=3299 runtime=io.containerd.runc.v2 Sep 6 00:14:18.799987 env[1321]: time="2025-09-06T00:14:18.799901730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:14:18.800079 env[1321]: time="2025-09-06T00:14:18.800000250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:14:18.800079 env[1321]: time="2025-09-06T00:14:18.800026250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:14:18.800872 env[1321]: time="2025-09-06T00:14:18.800831091Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a3f9a58b86fcd4d9135b10e5b10ea1529fe3532c7b8ed6174e94625c5afda62 pid=3318 runtime=io.containerd.runc.v2 Sep 6 00:14:18.827655 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:14:18.837067 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:14:18.848346 env[1321]: time="2025-09-06T00:14:18.848302322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q7sbt,Uid:02dcf552-cbc2-4557-87ba-68403470ccf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad28bcc4f2091f907f4f407a237a38ca3dc22762940367865ff1ccf551860314\"" Sep 6 00:14:18.849153 kubelet[2078]: E0906 00:14:18.849131 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:18.851400 env[1321]: time="2025-09-06T00:14:18.851045686Z" level=info msg="CreateContainer within sandbox \"ad28bcc4f2091f907f4f407a237a38ca3dc22762940367865ff1ccf551860314\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:14:18.859391 env[1321]: time="2025-09-06T00:14:18.859353819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nznrb,Uid:eb2ead0a-e17b-4505-b3ac-1f4fce3667cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a3f9a58b86fcd4d9135b10e5b10ea1529fe3532c7b8ed6174e94625c5afda62\"" Sep 6 00:14:18.860424 kubelet[2078]: E0906 00:14:18.860394 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:18.862140 env[1321]: time="2025-09-06T00:14:18.862110823Z" level=info msg="CreateContainer within sandbox \"8a3f9a58b86fcd4d9135b10e5b10ea1529fe3532c7b8ed6174e94625c5afda62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 00:14:18.877687 env[1321]: time="2025-09-06T00:14:18.877636526Z" level=info msg="CreateContainer within sandbox \"ad28bcc4f2091f907f4f407a237a38ca3dc22762940367865ff1ccf551860314\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd2143ea68743eb4c289230871a0da0e7810964b98c8f83f97999148d660f147\"" Sep 6 00:14:18.878137 env[1321]: time="2025-09-06T00:14:18.878112047Z" level=info msg="StartContainer for \"dd2143ea68743eb4c289230871a0da0e7810964b98c8f83f97999148d660f147\"" Sep 6 00:14:18.880974 env[1321]: time="2025-09-06T00:14:18.880938851Z" level=info msg="CreateContainer within sandbox \"8a3f9a58b86fcd4d9135b10e5b10ea1529fe3532c7b8ed6174e94625c5afda62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cddc61ff1bd51996d0cf9939e95b4f08280c48025b2c6c59e1adcebd6025c580\"" Sep 6 00:14:18.881814 env[1321]: 
time="2025-09-06T00:14:18.881790852Z" level=info msg="StartContainer for \"cddc61ff1bd51996d0cf9939e95b4f08280c48025b2c6c59e1adcebd6025c580\"" Sep 6 00:14:18.937378 env[1321]: time="2025-09-06T00:14:18.937309455Z" level=info msg="StartContainer for \"dd2143ea68743eb4c289230871a0da0e7810964b98c8f83f97999148d660f147\" returns successfully" Sep 6 00:14:18.938034 env[1321]: time="2025-09-06T00:14:18.937859496Z" level=info msg="StartContainer for \"cddc61ff1bd51996d0cf9939e95b4f08280c48025b2c6c59e1adcebd6025c580\" returns successfully" Sep 6 00:14:19.496156 kubelet[2078]: E0906 00:14:19.496118 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:19.499107 kubelet[2078]: E0906 00:14:19.499065 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:19.523941 kubelet[2078]: I0906 00:14:19.523873 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q7sbt" podStartSLOduration=19.523849724 podStartE2EDuration="19.523849724s" podCreationTimestamp="2025-09-06 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:14:19.507416581 +0000 UTC m=+25.186081774" watchObservedRunningTime="2025-09-06 00:14:19.523849724 +0000 UTC m=+25.202514877" Sep 6 00:14:19.524374 kubelet[2078]: I0906 00:14:19.524327 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nznrb" podStartSLOduration=19.524319445 podStartE2EDuration="19.524319445s" podCreationTimestamp="2025-09-06 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-06 00:14:19.523589924 +0000 UTC m=+25.202255077" watchObservedRunningTime="2025-09-06 00:14:19.524319445 +0000 UTC m=+25.202984598" Sep 6 00:14:19.795287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276457257.mount: Deactivated successfully. Sep 6 00:14:20.500364 kubelet[2078]: E0906 00:14:20.500326 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:20.500715 kubelet[2078]: E0906 00:14:20.500418 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:21.501953 kubelet[2078]: E0906 00:14:21.501907 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:21.505861 kubelet[2078]: E0906 00:14:21.505829 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:14:25.304408 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:48362.service. Sep 6 00:14:25.360321 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 48362 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:25.364234 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:25.370632 systemd-logind[1305]: New session 6 of user core. Sep 6 00:14:25.371107 systemd[1]: Started session-6.scope. Sep 6 00:14:25.512982 sshd[3459]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:25.517920 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:14:25.518083 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:48362.service: Deactivated successfully. 
Sep 6 00:14:25.519094 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:14:25.519516 systemd-logind[1305]: Removed session 6. Sep 6 00:14:30.515494 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:54696.service. Sep 6 00:14:30.568938 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 54696 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:30.570944 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:30.576689 systemd-logind[1305]: New session 7 of user core. Sep 6 00:14:30.577413 systemd[1]: Started session-7.scope. Sep 6 00:14:30.714400 sshd[3476]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:30.717094 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:54696.service: Deactivated successfully. Sep 6 00:14:30.718360 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:14:30.718379 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:14:30.719317 systemd-logind[1305]: Removed session 7. Sep 6 00:14:35.716943 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:54706.service. Sep 6 00:14:35.767342 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 54706 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:35.772359 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:35.784866 systemd-logind[1305]: New session 8 of user core. Sep 6 00:14:35.785285 systemd[1]: Started session-8.scope. Sep 6 00:14:35.908811 sshd[3493]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:35.911418 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:54706.service: Deactivated successfully. Sep 6 00:14:35.912365 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit. Sep 6 00:14:35.912409 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 00:14:35.914041 systemd-logind[1305]: Removed session 8. 
Sep 6 00:14:40.911965 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:48700.service. Sep 6 00:14:40.957756 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:40.959065 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:40.963057 systemd-logind[1305]: New session 9 of user core. Sep 6 00:14:40.963855 systemd[1]: Started session-9.scope. Sep 6 00:14:41.089199 sshd[3508]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:41.090402 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:48710.service. Sep 6 00:14:41.091550 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:48700.service: Deactivated successfully. Sep 6 00:14:41.092398 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit. Sep 6 00:14:41.092457 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 00:14:41.093155 systemd-logind[1305]: Removed session 9. Sep 6 00:14:41.134810 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:41.136370 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:41.140172 systemd-logind[1305]: New session 10 of user core. Sep 6 00:14:41.141186 systemd[1]: Started session-10.scope. Sep 6 00:14:41.285942 sshd[3522]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:41.291935 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:48722.service. Sep 6 00:14:41.292672 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:48710.service: Deactivated successfully. Sep 6 00:14:41.293604 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 00:14:41.295954 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit. Sep 6 00:14:41.303992 systemd-logind[1305]: Removed session 10. 
Sep 6 00:14:41.339421 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 48722 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:41.341046 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:41.344378 systemd-logind[1305]: New session 11 of user core. Sep 6 00:14:41.345181 systemd[1]: Started session-11.scope. Sep 6 00:14:41.464410 sshd[3535]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:41.466682 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit. Sep 6 00:14:41.466906 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:48722.service: Deactivated successfully. Sep 6 00:14:41.467655 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 00:14:41.468049 systemd-logind[1305]: Removed session 11. Sep 6 00:14:46.467456 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:48730.service. Sep 6 00:14:46.512827 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 48730 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:46.514227 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:46.519701 systemd-logind[1305]: New session 12 of user core. Sep 6 00:14:46.523452 systemd[1]: Started session-12.scope. Sep 6 00:14:46.637680 sshd[3551]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:46.642000 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:48730.service: Deactivated successfully. Sep 6 00:14:46.643287 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 00:14:46.643347 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit. Sep 6 00:14:46.645681 systemd-logind[1305]: Removed session 12. Sep 6 00:14:51.640195 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:50272.service. 
Sep 6 00:14:51.689865 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:51.691074 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:51.695096 systemd-logind[1305]: New session 13 of user core. Sep 6 00:14:51.695888 systemd[1]: Started session-13.scope. Sep 6 00:14:51.831716 sshd[3565]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:51.834042 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:50272.service: Deactivated successfully. Sep 6 00:14:51.835202 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit. Sep 6 00:14:51.835280 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 00:14:51.836050 systemd-logind[1305]: Removed session 13. Sep 6 00:14:56.834709 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:50284.service. Sep 6 00:14:56.887110 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:56.887643 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:56.897790 systemd-logind[1305]: New session 14 of user core. Sep 6 00:14:56.900789 systemd[1]: Started session-14.scope. Sep 6 00:14:57.058225 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:50290.service. Sep 6 00:14:57.060130 sshd[3583]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:57.062343 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:50284.service: Deactivated successfully. Sep 6 00:14:57.063255 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 00:14:57.063612 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit. Sep 6 00:14:57.066656 systemd-logind[1305]: Removed session 14. 
Sep 6 00:14:57.108758 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 50290 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:57.110511 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:57.114902 systemd-logind[1305]: New session 15 of user core. Sep 6 00:14:57.115450 systemd[1]: Started session-15.scope. Sep 6 00:14:57.365468 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:50294.service. Sep 6 00:14:57.366315 sshd[3598]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:57.368507 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:50290.service: Deactivated successfully. Sep 6 00:14:57.369432 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit. Sep 6 00:14:57.369481 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 00:14:57.370651 systemd-logind[1305]: Removed session 15. Sep 6 00:14:57.422548 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 50294 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:57.423899 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:57.430004 systemd-logind[1305]: New session 16 of user core. Sep 6 00:14:57.430280 systemd[1]: Started session-16.scope. Sep 6 00:14:58.713731 sshd[3610]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:58.716073 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:50302.service. Sep 6 00:14:58.720039 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:50294.service: Deactivated successfully. Sep 6 00:14:58.721486 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit. Sep 6 00:14:58.721525 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 00:14:58.724498 systemd-logind[1305]: Removed session 16. 
Sep 6 00:14:58.779405 sshd[3629]: Accepted publickey for core from 10.0.0.1 port 50302 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:58.781033 sshd[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:58.784255 systemd-logind[1305]: New session 17 of user core. Sep 6 00:14:58.785136 systemd[1]: Started session-17.scope. Sep 6 00:14:59.017380 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:50312.service. Sep 6 00:14:59.014841 sshd[3629]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:59.025084 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit. Sep 6 00:14:59.025287 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:50302.service: Deactivated successfully. Sep 6 00:14:59.026134 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 00:14:59.026527 systemd-logind[1305]: Removed session 17. Sep 6 00:14:59.062365 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 50312 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:14:59.063917 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:14:59.067391 systemd-logind[1305]: New session 18 of user core. Sep 6 00:14:59.068331 systemd[1]: Started session-18.scope. Sep 6 00:14:59.182642 sshd[3643]: pam_unix(sshd:session): session closed for user core Sep 6 00:14:59.184974 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:50312.service: Deactivated successfully. Sep 6 00:14:59.185885 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit. Sep 6 00:14:59.185956 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 00:14:59.186806 systemd-logind[1305]: Removed session 18. Sep 6 00:15:04.185804 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:43286.service. 
Sep 6 00:15:04.229040 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 43286 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:04.230255 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:04.233631 systemd-logind[1305]: New session 19 of user core. Sep 6 00:15:04.234482 systemd[1]: Started session-19.scope. Sep 6 00:15:04.341882 sshd[3665]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:04.344496 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:43286.service: Deactivated successfully. Sep 6 00:15:04.345475 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit. Sep 6 00:15:04.345517 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 00:15:04.346305 systemd-logind[1305]: Removed session 19. Sep 6 00:15:04.408145 kubelet[2078]: E0906 00:15:04.408113 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:09.344508 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:43290.service. Sep 6 00:15:09.387026 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:09.388710 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:09.393359 systemd-logind[1305]: New session 20 of user core. Sep 6 00:15:09.393905 systemd[1]: Started session-20.scope. Sep 6 00:15:09.506686 sshd[3679]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:09.509111 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:43290.service: Deactivated successfully. Sep 6 00:15:09.510132 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit. Sep 6 00:15:09.510206 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 00:15:09.511145 systemd-logind[1305]: Removed session 20. 
Sep 6 00:15:13.408013 kubelet[2078]: E0906 00:15:13.407980 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:14.408168 kubelet[2078]: E0906 00:15:14.407984 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:14.512343 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:40154.service. Sep 6 00:15:14.558656 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 40154 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:14.560015 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:14.564489 systemd-logind[1305]: New session 21 of user core. Sep 6 00:15:14.565726 systemd[1]: Started session-21.scope. Sep 6 00:15:14.680974 sshd[3695]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:14.683869 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit. Sep 6 00:15:14.684596 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:40154.service: Deactivated successfully. Sep 6 00:15:14.685425 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 00:15:14.686354 systemd-logind[1305]: Removed session 21. Sep 6 00:15:19.684355 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:40158.service. Sep 6 00:15:19.731427 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:19.733688 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:19.739309 systemd-logind[1305]: New session 22 of user core. Sep 6 00:15:19.739669 systemd[1]: Started session-22.scope. 
Sep 6 00:15:19.856617 sshd[3709]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:19.859081 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:40166.service. Sep 6 00:15:19.860225 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:15:19.860731 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:40158.service: Deactivated successfully. Sep 6 00:15:19.861527 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:15:19.861999 systemd-logind[1305]: Removed session 22. Sep 6 00:15:19.902177 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 40166 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:19.903420 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:19.906775 systemd-logind[1305]: New session 23 of user core. Sep 6 00:15:19.907800 systemd[1]: Started session-23.scope. Sep 6 00:15:20.408187 kubelet[2078]: E0906 00:15:20.408154 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:21.407698 kubelet[2078]: E0906 00:15:21.407661 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:22.262952 systemd[1]: run-containerd-runc-k8s.io-2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0-runc.7EoNmD.mount: Deactivated successfully. 
Sep 6 00:15:22.271302 env[1321]: time="2025-09-06T00:15:22.271250007Z" level=info msg="StopContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" with timeout 30 (s)" Sep 6 00:15:22.273099 env[1321]: time="2025-09-06T00:15:22.273058018Z" level=info msg="Stop container \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" with signal terminated" Sep 6 00:15:22.288693 env[1321]: time="2025-09-06T00:15:22.288626152Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:15:22.296094 env[1321]: time="2025-09-06T00:15:22.296056717Z" level=info msg="StopContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" with timeout 2 (s)" Sep 6 00:15:22.296385 env[1321]: time="2025-09-06T00:15:22.296356239Z" level=info msg="Stop container \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" with signal terminated" Sep 6 00:15:22.302334 systemd-networkd[1099]: lxc_health: Link DOWN Sep 6 00:15:22.302340 systemd-networkd[1099]: lxc_health: Lost carrier Sep 6 00:15:22.304463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a-rootfs.mount: Deactivated successfully. 
Sep 6 00:15:22.311149 env[1321]: time="2025-09-06T00:15:22.311091488Z" level=info msg="shim disconnected" id=c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a
Sep 6 00:15:22.311149 env[1321]: time="2025-09-06T00:15:22.311150408Z" level=warning msg="cleaning up after shim disconnected" id=c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a namespace=k8s.io
Sep 6 00:15:22.311353 env[1321]: time="2025-09-06T00:15:22.311159488Z" level=info msg="cleaning up dead shim"
Sep 6 00:15:22.321089 env[1321]: time="2025-09-06T00:15:22.321047668Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3778 runtime=io.containerd.runc.v2\n"
Sep 6 00:15:22.327480 env[1321]: time="2025-09-06T00:15:22.327428146Z" level=info msg="StopContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" returns successfully"
Sep 6 00:15:22.328695 env[1321]: time="2025-09-06T00:15:22.328082870Z" level=info msg="StopPodSandbox for \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\""
Sep 6 00:15:22.328695 env[1321]: time="2025-09-06T00:15:22.328155191Z" level=info msg="Container to stop \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.330097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c-shm.mount: Deactivated successfully.
Sep 6 00:15:22.346000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0-rootfs.mount: Deactivated successfully.
Sep 6 00:15:22.353045 env[1321]: time="2025-09-06T00:15:22.352966221Z" level=info msg="shim disconnected" id=2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0
Sep 6 00:15:22.353045 env[1321]: time="2025-09-06T00:15:22.353012781Z" level=warning msg="cleaning up after shim disconnected" id=2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0 namespace=k8s.io
Sep 6 00:15:22.353045 env[1321]: time="2025-09-06T00:15:22.353023501Z" level=info msg="cleaning up dead shim"
Sep 6 00:15:22.359455 env[1321]: time="2025-09-06T00:15:22.359406900Z" level=info msg="shim disconnected" id=4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c
Sep 6 00:15:22.359881 env[1321]: time="2025-09-06T00:15:22.359689941Z" level=warning msg="cleaning up after shim disconnected" id=4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c namespace=k8s.io
Sep 6 00:15:22.360081 env[1321]: time="2025-09-06T00:15:22.360063184Z" level=info msg="cleaning up dead shim"
Sep 6 00:15:22.361966 env[1321]: time="2025-09-06T00:15:22.361922835Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3828 runtime=io.containerd.runc.v2\n"
Sep 6 00:15:22.363998 env[1321]: time="2025-09-06T00:15:22.363962487Z" level=info msg="StopContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" returns successfully"
Sep 6 00:15:22.364516 env[1321]: time="2025-09-06T00:15:22.364481730Z" level=info msg="StopPodSandbox for \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\""
Sep 6 00:15:22.364603 env[1321]: time="2025-09-06T00:15:22.364579931Z" level=info msg="Container to stop \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.364637 env[1321]: time="2025-09-06T00:15:22.364604691Z" level=info msg="Container to stop \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.364717 env[1321]: time="2025-09-06T00:15:22.364684611Z" level=info msg="Container to stop \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.364776 env[1321]: time="2025-09-06T00:15:22.364719332Z" level=info msg="Container to stop \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.364776 env[1321]: time="2025-09-06T00:15:22.364733212Z" level=info msg="Container to stop \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:15:22.367422 env[1321]: time="2025-09-06T00:15:22.367386508Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3842 runtime=io.containerd.runc.v2\n"
Sep 6 00:15:22.367834 env[1321]: time="2025-09-06T00:15:22.367804310Z" level=info msg="TearDown network for sandbox \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\" successfully"
Sep 6 00:15:22.367938 env[1321]: time="2025-09-06T00:15:22.367918071Z" level=info msg="StopPodSandbox for \"4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c\" returns successfully"
Sep 6 00:15:22.395158 env[1321]: time="2025-09-06T00:15:22.395098835Z" level=info msg="shim disconnected" id=99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9
Sep 6 00:15:22.395158 env[1321]: time="2025-09-06T00:15:22.395157636Z" level=warning msg="cleaning up after shim disconnected" id=99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9 namespace=k8s.io
Sep 6 00:15:22.395350 env[1321]: time="2025-09-06T00:15:22.395169156Z" level=info msg="cleaning up dead shim"
Sep 6 00:15:22.402512 env[1321]: time="2025-09-06T00:15:22.402470640Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3875 runtime=io.containerd.runc.v2\n"
Sep 6 00:15:22.402823 env[1321]: time="2025-09-06T00:15:22.402794762Z" level=info msg="TearDown network for sandbox \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" successfully"
Sep 6 00:15:22.402863 env[1321]: time="2025-09-06T00:15:22.402824322Z" level=info msg="StopPodSandbox for \"99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9\" returns successfully"
Sep 6 00:15:22.453608 kubelet[2078]: I0906 00:15:22.453558 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-lib-modules\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.453608 kubelet[2078]: I0906 00:15:22.453601 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-kernel\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453632 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-config-path\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453648 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-net\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453666 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-etc-cni-netd\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453685 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-kube-api-access-7z9r7\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453700 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hostproc\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454072 kubelet[2078]: I0906 00:15:22.453715 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-cgroup\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453729 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-xtables-lock\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453759 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-run\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453777 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-clustermesh-secrets\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453791 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-bpf-maps\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453806 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cni-path\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454336 kubelet[2078]: I0906 00:15:22.453856 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/b6cf33f3-319b-467e-9beb-91a9351f56bd-kube-api-access-qp8ps\") pod \"b6cf33f3-319b-467e-9beb-91a9351f56bd\" (UID: \"b6cf33f3-319b-467e-9beb-91a9351f56bd\") "
Sep 6 00:15:22.454481 kubelet[2078]: I0906 00:15:22.453878 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hubble-tls\") pod \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\" (UID: \"92bb802e-3560-46ec-9f62-7cd2b4c2aba4\") "
Sep 6 00:15:22.454481 kubelet[2078]: I0906 00:15:22.453895 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6cf33f3-319b-467e-9beb-91a9351f56bd-cilium-config-path\") pod \"b6cf33f3-319b-467e-9beb-91a9351f56bd\" (UID: \"b6cf33f3-319b-467e-9beb-91a9351f56bd\") "
Sep 6 00:15:22.473164 kubelet[2078]: I0906 00:15:22.472952 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.473164 kubelet[2078]: I0906 00:15:22.473031 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.474363 kubelet[2078]: I0906 00:15:22.474297 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cf33f3-319b-467e-9beb-91a9351f56bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6cf33f3-319b-467e-9beb-91a9351f56bd" (UID: "b6cf33f3-319b-467e-9beb-91a9351f56bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:15:22.474363 kubelet[2078]: I0906 00:15:22.474359 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.475725 kubelet[2078]: I0906 00:15:22.475680 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.475725 kubelet[2078]: I0906 00:15:22.475726 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cni-path" (OuterVolumeSpecName: "cni-path") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.475858 kubelet[2078]: I0906 00:15:22.475734 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-kube-api-access-7z9r7" (OuterVolumeSpecName: "kube-api-access-7z9r7") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "kube-api-access-7z9r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:15:22.475858 kubelet[2078]: I0906 00:15:22.475761 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.476117 kubelet[2078]: I0906 00:15:22.475551 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 6 00:15:22.476166 kubelet[2078]: I0906 00:15:22.476130 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.476197 kubelet[2078]: I0906 00:15:22.476165 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hostproc" (OuterVolumeSpecName: "hostproc") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.476197 kubelet[2078]: I0906 00:15:22.476184 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.476243 kubelet[2078]: I0906 00:15:22.476199 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 6 00:15:22.478418 kubelet[2078]: I0906 00:15:22.478286 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 6 00:15:22.479419 kubelet[2078]: I0906 00:15:22.478919 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cf33f3-319b-467e-9beb-91a9351f56bd-kube-api-access-qp8ps" (OuterVolumeSpecName: "kube-api-access-qp8ps") pod "b6cf33f3-319b-467e-9beb-91a9351f56bd" (UID: "b6cf33f3-319b-467e-9beb-91a9351f56bd"). InnerVolumeSpecName "kube-api-access-qp8ps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:15:22.479861 kubelet[2078]: I0906 00:15:22.479830 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "92bb802e-3560-46ec-9f62-7cd2b4c2aba4" (UID: "92bb802e-3560-46ec-9f62-7cd2b4c2aba4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 6 00:15:22.556169 kubelet[2078]: I0906 00:15:22.554338 2078 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556334 kubelet[2078]: I0906 00:15:22.556314 2078 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556395 kubelet[2078]: I0906 00:15:22.556386 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556452 kubelet[2078]: I0906 00:15:22.556442 2078 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556515 kubelet[2078]: I0906 00:15:22.556505 2078 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556571 kubelet[2078]: I0906 00:15:22.556560 2078 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z9r7\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-kube-api-access-7z9r7\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556628 kubelet[2078]: I0906 00:15:22.556618 2078 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556692 kubelet[2078]: I0906 00:15:22.556682 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556764 kubelet[2078]: I0906 00:15:22.556753 2078 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556827 kubelet[2078]: I0906 00:15:22.556817 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556885 kubelet[2078]: I0906 00:15:22.556875 2078 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.556938 kubelet[2078]: I0906 00:15:22.556928 2078 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.557051 kubelet[2078]: I0906 00:15:22.557039 2078 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.557123 kubelet[2078]: I0906 00:15:22.557104 2078 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/b6cf33f3-319b-467e-9beb-91a9351f56bd-kube-api-access-qp8ps\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.557182 kubelet[2078]: I0906 00:15:22.557173 2078 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92bb802e-3560-46ec-9f62-7cd2b4c2aba4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.557240 kubelet[2078]: I0906 00:15:22.557230 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6cf33f3-319b-467e-9beb-91a9351f56bd-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:15:22.621936 kubelet[2078]: I0906 00:15:22.621886 2078 scope.go:117] "RemoveContainer" containerID="c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a"
Sep 6 00:15:22.625298 env[1321]: time="2025-09-06T00:15:22.625258866Z" level=info msg="RemoveContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\""
Sep 6 00:15:22.635105 env[1321]: time="2025-09-06T00:15:22.635065165Z" level=info msg="RemoveContainer for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" returns successfully"
Sep 6 00:15:22.636071 kubelet[2078]: I0906 00:15:22.636023 2078 scope.go:117] "RemoveContainer" containerID="c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a"
Sep 6 00:15:22.636356 env[1321]: time="2025-09-06T00:15:22.636250532Z" level=error msg="ContainerStatus for \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\": not found"
Sep 6 00:15:22.636487 kubelet[2078]: E0906 00:15:22.636461 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\": not found" containerID="c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a"
Sep 6 00:15:22.636566 kubelet[2078]: I0906 00:15:22.636493 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a"} err="failed to get container status \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3658cd1938bb8d8f31067dda853cedc1414facb596a04e37421a7e51b88956a\": not found"
Sep 6 00:15:22.636566 kubelet[2078]: I0906 00:15:22.636565 2078 scope.go:117] "RemoveContainer" containerID="2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0"
Sep 6 00:15:22.637894 env[1321]: time="2025-09-06T00:15:22.637855982Z" level=info msg="RemoveContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\""
Sep 6 00:15:22.641080 env[1321]: time="2025-09-06T00:15:22.641039361Z" level=info msg="RemoveContainer for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" returns successfully"
Sep 6 00:15:22.641288 kubelet[2078]: I0906 00:15:22.641266 2078 scope.go:117] "RemoveContainer" containerID="34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a"
Sep 6 00:15:22.642223 env[1321]: time="2025-09-06T00:15:22.642196608Z" level=info msg="RemoveContainer for \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\""
Sep 6 00:15:22.645585 env[1321]: time="2025-09-06T00:15:22.645534268Z" level=info msg="RemoveContainer for \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\" returns successfully"
Sep 6 00:15:22.645812 kubelet[2078]: I0906 00:15:22.645791 2078 scope.go:117] "RemoveContainer" containerID="19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02"
Sep 6 00:15:22.647244 env[1321]: time="2025-09-06T00:15:22.647213918Z" level=info msg="RemoveContainer for \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\""
Sep 6 00:15:22.649799 env[1321]: time="2025-09-06T00:15:22.649769414Z" level=info msg="RemoveContainer for \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\" returns successfully"
Sep 6 00:15:22.650027 kubelet[2078]: I0906 00:15:22.650010 2078 scope.go:117] "RemoveContainer" containerID="1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f"
Sep 6 00:15:22.651095 env[1321]: time="2025-09-06T00:15:22.651056061Z" level=info msg="RemoveContainer for \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\""
Sep 6 00:15:22.655089 env[1321]: time="2025-09-06T00:15:22.655022245Z" level=info msg="RemoveContainer for \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\" returns successfully"
Sep 6 00:15:22.655297 kubelet[2078]: I0906 00:15:22.655272 2078 scope.go:117] "RemoveContainer" containerID="3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61"
Sep 6 00:15:22.657059 env[1321]: time="2025-09-06T00:15:22.657023258Z" level=info msg="RemoveContainer for \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\""
Sep 6 00:15:22.665170 env[1321]: time="2025-09-06T00:15:22.664557023Z" level=info msg="RemoveContainer for \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\" returns successfully"
Sep 6 00:15:22.665462 kubelet[2078]: I0906 00:15:22.665438 2078 scope.go:117] "RemoveContainer" containerID="2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0"
Sep 6 00:15:22.666037 env[1321]: time="2025-09-06T00:15:22.665932551Z" level=error msg="ContainerStatus for \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\": not found"
Sep 6 00:15:22.666260 kubelet[2078]: E0906 00:15:22.666237 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\": not found" containerID="2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0"
Sep 6 00:15:22.666369 kubelet[2078]: I0906 00:15:22.666341 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0"} err="failed to get container status \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d3fc00963bfb0b6768795013473b760c5cc58f1fa29bf5f98c6ad1bc08970a0\": not found"
Sep 6 00:15:22.666443 kubelet[2078]: I0906 00:15:22.666432 2078 scope.go:117] "RemoveContainer" containerID="34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a"
Sep 6 00:15:22.666840 env[1321]: time="2025-09-06T00:15:22.666728996Z" level=error msg="ContainerStatus for \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\": not found"
Sep 6 00:15:22.666918 kubelet[2078]: E0906 00:15:22.666882 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\": not found" containerID="34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a"
Sep 6 00:15:22.666950 kubelet[2078]: I0906 00:15:22.666914 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a"} err="failed to get container status \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\": rpc error: code = NotFound desc = an error occurred when try to find container \"34ada37c07f2e6e5851fb6dbd3bf36ea5f7d468235bebe0d6e597cfdb602d30a\": not found"
Sep 6 00:15:22.666950 kubelet[2078]: I0906 00:15:22.666933 2078 scope.go:117] "RemoveContainer" containerID="19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02"
Sep 6 00:15:22.667650 env[1321]: time="2025-09-06T00:15:22.667565361Z" level=error msg="ContainerStatus for \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\": not found"
Sep 6 00:15:22.668524 kubelet[2078]: E0906 00:15:22.667886 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\": not found" containerID="19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02"
Sep 6 00:15:22.668975 kubelet[2078]: I0906 00:15:22.668659 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02"} err="failed to get container status \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\": rpc error: code = NotFound desc = an error occurred when try to find container \"19e3494cc1425502e89b997796025b4ee08267c76d6844d3d6fded8e702bdd02\": not found"
Sep 6 00:15:22.668975 kubelet[2078]: I0906 00:15:22.668907 2078 scope.go:117] "RemoveContainer" containerID="1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f"
Sep 6 00:15:22.669804 env[1321]: time="2025-09-06T00:15:22.669691974Z" level=error msg="ContainerStatus for \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\": not found"
Sep 6 00:15:22.669902 kubelet[2078]: E0906 00:15:22.669873 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\": not found" containerID="1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f"
Sep 6 00:15:22.669940 kubelet[2078]: I0906 00:15:22.669902 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f"} err="failed to get container status \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1528c7a60cbe58bd8a015bf94a12051266e225f266f151125732840ed77f4d2f\": not found"
Sep 6 00:15:22.669940 kubelet[2078]: I0906 00:15:22.669918 2078 scope.go:117] "RemoveContainer" containerID="3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61"
Sep 6 00:15:22.670377 env[1321]: time="2025-09-06T00:15:22.670244897Z" level=error msg="ContainerStatus for \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\": not found"
Sep 6 00:15:22.670973 kubelet[2078]: E0906 00:15:22.670942 2078 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\": not found" containerID="3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61"
Sep 6 00:15:22.671094 kubelet[2078]: I0906 00:15:22.671072 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61"} err="failed to get container status \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a4a4210397a20aa175fa2a88f30474c5043556ec09b18918941142f83ae9c61\": not found"
Sep 6 00:15:23.259959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4329627a948055ef57e90d5c6187a6c278964a44a0555fa48574e56f0f89bf1c-rootfs.mount: Deactivated successfully.
Sep 6 00:15:23.260105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9-rootfs.mount: Deactivated successfully.
Sep 6 00:15:23.260294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99e6149920e90919c8a27af8803e5bdd07fab99d06e56c6d8c28a802739585c9-shm.mount: Deactivated successfully.
Sep 6 00:15:23.260376 systemd[1]: var-lib-kubelet-pods-b6cf33f3\x2d319b\x2d467e\x2d9beb\x2d91a9351f56bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqp8ps.mount: Deactivated successfully.
Sep 6 00:15:23.260455 systemd[1]: var-lib-kubelet-pods-92bb802e\x2d3560\x2d46ec\x2d9f62\x2d7cd2b4c2aba4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7z9r7.mount: Deactivated successfully.
Sep 6 00:15:23.260537 systemd[1]: var-lib-kubelet-pods-92bb802e\x2d3560\x2d46ec\x2d9f62\x2d7cd2b4c2aba4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 6 00:15:23.260612 systemd[1]: var-lib-kubelet-pods-92bb802e\x2d3560\x2d46ec\x2d9f62\x2d7cd2b4c2aba4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 6 00:15:24.211285 sshd[3722]: pam_unix(sshd:session): session closed for user core
Sep 6 00:15:24.217664 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:49472.service.
Sep 6 00:15:24.218520 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:40166.service: Deactivated successfully.
Sep 6 00:15:24.220956 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 00:15:24.221070 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit.
Sep 6 00:15:24.223169 systemd-logind[1305]: Removed session 23.
Sep 6 00:15:24.273858 sshd[3892]: Accepted publickey for core from 10.0.0.1 port 49472 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4
Sep 6 00:15:24.275214 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:15:24.279416 systemd-logind[1305]: New session 24 of user core.
Sep 6 00:15:24.280430 systemd[1]: Started session-24.scope.
Sep 6 00:15:24.410708 kubelet[2078]: I0906 00:15:24.410628 2078 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" path="/var/lib/kubelet/pods/92bb802e-3560-46ec-9f62-7cd2b4c2aba4/volumes"
Sep 6 00:15:24.411335 kubelet[2078]: I0906 00:15:24.411303 2078 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cf33f3-319b-467e-9beb-91a9351f56bd" path="/var/lib/kubelet/pods/b6cf33f3-319b-467e-9beb-91a9351f56bd/volumes"
Sep 6 00:15:24.511004 kubelet[2078]: E0906 00:15:24.510892 2078 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:15:25.407614 kubelet[2078]: E0906 00:15:25.407564 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nznrb" podUID="eb2ead0a-e17b-4505-b3ac-1f4fce3667cc"
Sep 6 00:15:25.655353 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:49484.service.
Sep 6 00:15:25.656225 sshd[3892]: pam_unix(sshd:session): session closed for user core
Sep 6 00:15:25.670976 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:49472.service: Deactivated successfully.
Sep 6 00:15:25.672035 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:15:25.672036 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:15:25.673135 systemd-logind[1305]: Removed session 24.
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681644 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="apply-sysctl-overwrites"
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681674 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="mount-cgroup"
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681681 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6cf33f3-319b-467e-9beb-91a9351f56bd" containerName="cilium-operator"
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681686 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="mount-bpf-fs"
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681691 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="clean-cilium-state"
Sep 6 00:15:25.681910 kubelet[2078]: E0906 00:15:25.681696 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="cilium-agent"
Sep 6 00:15:25.681910 kubelet[2078]: I0906 00:15:25.681720 2078 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cf33f3-319b-467e-9beb-91a9351f56bd" containerName="cilium-operator"
Sep 6 00:15:25.681910 kubelet[2078]: I0906 00:15:25.681726 2078 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bb802e-3560-46ec-9f62-7cd2b4c2aba4" containerName="cilium-agent"
Sep 6 00:15:25.710314 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 49484 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4
Sep 6 00:15:25.708574 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:15:25.714901 systemd[1]: Started session-25.scope. Sep 6 00:15:25.715282 systemd-logind[1305]: New session 25 of user core. Sep 6 00:15:25.844809 sshd[3904]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:25.847198 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:49500.service. Sep 6 00:15:25.850854 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:49484.service: Deactivated successfully. Sep 6 00:15:25.854662 kubelet[2078]: E0906 00:15:25.853207 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-cv7g7 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-tbpvh" podUID="ef375268-0fd5-43cb-92bb-181d82d7b6dd" Sep 6 00:15:25.853811 systemd-logind[1305]: Session 25 logged out. Waiting for processes to exit. Sep 6 00:15:25.854246 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 00:15:25.861871 systemd-logind[1305]: Removed session 25. 
Sep 6 00:15:25.879205 kubelet[2078]: I0906 00:15:25.879175 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7g7\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-kube-api-access-cv7g7\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879356 kubelet[2078]: I0906 00:15:25.879337 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-ipsec-secrets\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879463 kubelet[2078]: I0906 00:15:25.879448 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-etc-cni-netd\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879538 kubelet[2078]: I0906 00:15:25.879525 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-lib-modules\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879626 kubelet[2078]: I0906 00:15:25.879613 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-cgroup\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879701 kubelet[2078]: I0906 00:15:25.879687 2078 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cni-path\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879793 kubelet[2078]: I0906 00:15:25.879780 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hubble-tls\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879875 kubelet[2078]: I0906 00:15:25.879862 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-bpf-maps\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.879945 kubelet[2078]: I0906 00:15:25.879933 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-clustermesh-secrets\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880013 kubelet[2078]: I0906 00:15:25.880001 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-kernel\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880140 kubelet[2078]: I0906 00:15:25.880121 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-net\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880244 kubelet[2078]: I0906 00:15:25.880227 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-xtables-lock\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880333 kubelet[2078]: I0906 00:15:25.880319 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-run\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880411 kubelet[2078]: I0906 00:15:25.880399 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-config-path\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.880486 kubelet[2078]: I0906 00:15:25.880474 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hostproc\") pod \"cilium-tbpvh\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " pod="kube-system/cilium-tbpvh" Sep 6 00:15:25.896980 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:15:25.898289 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:15:25.902297 systemd-logind[1305]: New session 26 of user core. 
Sep 6 00:15:25.902632 systemd[1]: Started session-26.scope. Sep 6 00:15:26.483831 kubelet[2078]: I0906 00:15:26.483727 2078 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:15:26Z","lastTransitionTime":"2025-09-06T00:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:15:26.684706 kubelet[2078]: I0906 00:15:26.684659 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7g7\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-kube-api-access-cv7g7\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684820 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cni-path\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684848 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-kernel\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684867 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-run\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684881 2078 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hostproc\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684899 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-cgroup\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685073 kubelet[2078]: I0906 00:15:26.684918 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hubble-tls\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.684933 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-net\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.684953 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-ipsec-secrets\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.684975 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-clustermesh-secrets\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: 
\"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.684990 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-etc-cni-netd\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.685006 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-lib-modules\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685227 kubelet[2078]: I0906 00:15:26.685024 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-bpf-maps\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685349 kubelet[2078]: I0906 00:15:26.685052 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-xtables-lock\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685349 kubelet[2078]: I0906 00:15:26.685073 2078 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-config-path\") pod \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\" (UID: \"ef375268-0fd5-43cb-92bb-181d82d7b6dd\") " Sep 6 00:15:26.685349 kubelet[2078]: I0906 00:15:26.684923 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-kernel" 
(OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685349 kubelet[2078]: I0906 00:15:26.684947 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685349 kubelet[2078]: I0906 00:15:26.684960 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685503 kubelet[2078]: I0906 00:15:26.684972 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685503 kubelet[2078]: I0906 00:15:26.684982 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685503 kubelet[2078]: I0906 00:15:26.684992 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.685503 kubelet[2078]: I0906 00:15:26.685140 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.686827 kubelet[2078]: I0906 00:15:26.686777 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:15:26.686905 kubelet[2078]: I0906 00:15:26.686835 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.686905 kubelet[2078]: I0906 00:15:26.686854 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.686905 kubelet[2078]: I0906 00:15:26.686871 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:15:26.689247 kubelet[2078]: I0906 00:15:26.687761 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:15:26.689345 kubelet[2078]: I0906 00:15:26.689301 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:15:26.689863 systemd[1]: var-lib-kubelet-pods-ef375268\x2d0fd5\x2d43cb\x2d92bb\x2d181d82d7b6dd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:15:26.690195 kubelet[2078]: I0906 00:15:26.690155 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-kube-api-access-cv7g7" (OuterVolumeSpecName: "kube-api-access-cv7g7") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "kube-api-access-cv7g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:15:26.690371 kubelet[2078]: I0906 00:15:26.690353 2078 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ef375268-0fd5-43cb-92bb-181d82d7b6dd" (UID: "ef375268-0fd5-43cb-92bb-181d82d7b6dd"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:15:26.693163 systemd[1]: var-lib-kubelet-pods-ef375268\x2d0fd5\x2d43cb\x2d92bb\x2d181d82d7b6dd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:15:26.693350 systemd[1]: var-lib-kubelet-pods-ef375268\x2d0fd5\x2d43cb\x2d92bb\x2d181d82d7b6dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcv7g7.mount: Deactivated successfully. Sep 6 00:15:26.693435 systemd[1]: var-lib-kubelet-pods-ef375268\x2d0fd5\x2d43cb\x2d92bb\x2d181d82d7b6dd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:15:26.785817 kubelet[2078]: I0906 00:15:26.785707 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.785987 kubelet[2078]: I0906 00:15:26.785973 2078 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786052 kubelet[2078]: I0906 00:15:26.786040 2078 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786105 kubelet[2078]: I0906 00:15:26.786096 2078 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786174 kubelet[2078]: I0906 00:15:26.786164 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786232 kubelet[2078]: I0906 00:15:26.786223 2078 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786290 kubelet[2078]: I0906 00:15:26.786281 2078 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786344 kubelet[2078]: I0906 00:15:26.786335 2078 reconciler_common.go:293] "Volume detached 
for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786394 kubelet[2078]: I0906 00:15:26.786386 2078 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786445 kubelet[2078]: I0906 00:15:26.786437 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786494 kubelet[2078]: I0906 00:15:26.786485 2078 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv7g7\" (UniqueName: \"kubernetes.io/projected/ef375268-0fd5-43cb-92bb-181d82d7b6dd-kube-api-access-cv7g7\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786546 kubelet[2078]: I0906 00:15:26.786538 2078 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786599 kubelet[2078]: I0906 00:15:26.786590 2078 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786650 kubelet[2078]: I0906 00:15:26.786642 2078 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:15:26.786699 kubelet[2078]: I0906 00:15:26.786690 2078 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef375268-0fd5-43cb-92bb-181d82d7b6dd-hostproc\") 
on node \"localhost\" DevicePath \"\"" Sep 6 00:15:27.407578 kubelet[2078]: E0906 00:15:27.407520 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nznrb" podUID="eb2ead0a-e17b-4505-b3ac-1f4fce3667cc" Sep 6 00:15:27.693403 kubelet[2078]: W0906 00:15:27.693292 2078 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:15:27.693403 kubelet[2078]: E0906 00:15:27.693342 2078 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:15:27.696574 kubelet[2078]: W0906 00:15:27.696546 2078 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:15:27.696702 kubelet[2078]: E0906 00:15:27.696684 2078 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship 
found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:15:27.696834 kubelet[2078]: W0906 00:15:27.696818 2078 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:15:27.696910 kubelet[2078]: E0906 00:15:27.696894 2078 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:15:27.697034 kubelet[2078]: W0906 00:15:27.697019 2078 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 6 00:15:27.697106 kubelet[2078]: E0906 00:15:27.697091 2078 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 6 00:15:27.793272 kubelet[2078]: I0906 00:15:27.793229 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-cni-path\") pod \"cilium-2fnn6\" (UID: 
\"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793446 kubelet[2078]: I0906 00:15:27.793430 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-cilium-run\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793565 kubelet[2078]: I0906 00:15:27.793551 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-host-proc-sys-net\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793651 kubelet[2078]: I0906 00:15:27.793637 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-bpf-maps\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793730 kubelet[2078]: I0906 00:15:27.793714 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-etc-cni-netd\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793842 kubelet[2078]: I0906 00:15:27.793829 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73eb73cd-c922-443d-a54a-1e5de6deeb3b-hubble-tls\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.793930 kubelet[2078]: I0906 00:15:27.793915 2078 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kft\" (UniqueName: \"kubernetes.io/projected/73eb73cd-c922-443d-a54a-1e5de6deeb3b-kube-api-access-g4kft\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794038 kubelet[2078]: I0906 00:15:27.794023 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-hostproc\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794133 kubelet[2078]: I0906 00:15:27.794111 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-cilium-cgroup\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794212 kubelet[2078]: I0906 00:15:27.794200 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73eb73cd-c922-443d-a54a-1e5de6deeb3b-cilium-config-path\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794297 kubelet[2078]: I0906 00:15:27.794284 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73eb73cd-c922-443d-a54a-1e5de6deeb3b-cilium-ipsec-secrets\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794373 kubelet[2078]: I0906 00:15:27.794357 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-host-proc-sys-kernel\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794465 kubelet[2078]: I0906 00:15:27.794451 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-lib-modules\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794550 kubelet[2078]: I0906 00:15:27.794535 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73eb73cd-c922-443d-a54a-1e5de6deeb3b-clustermesh-secrets\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:27.794637 kubelet[2078]: I0906 00:15:27.794624 2078 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73eb73cd-c922-443d-a54a-1e5de6deeb3b-xtables-lock\") pod \"cilium-2fnn6\" (UID: \"73eb73cd-c922-443d-a54a-1e5de6deeb3b\") " pod="kube-system/cilium-2fnn6" Sep 6 00:15:28.408867 kubelet[2078]: I0906 00:15:28.408829 2078 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef375268-0fd5-43cb-92bb-181d82d7b6dd" path="/var/lib/kubelet/pods/ef375268-0fd5-43cb-92bb-181d82d7b6dd/volumes" Sep 6 00:15:28.896554 kubelet[2078]: E0906 00:15:28.896500 2078 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 6 00:15:28.896924 kubelet[2078]: E0906 00:15:28.896598 2078 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73eb73cd-c922-443d-a54a-1e5de6deeb3b-clustermesh-secrets 
podName:73eb73cd-c922-443d-a54a-1e5de6deeb3b nodeName:}" failed. No retries permitted until 2025-09-06 00:15:29.396576315 +0000 UTC m=+95.075241468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/73eb73cd-c922-443d-a54a-1e5de6deeb3b-clustermesh-secrets") pod "cilium-2fnn6" (UID: "73eb73cd-c922-443d-a54a-1e5de6deeb3b") : failed to sync secret cache: timed out waiting for the condition Sep 6 00:15:29.408841 kubelet[2078]: E0906 00:15:29.406830 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nznrb" podUID="eb2ead0a-e17b-4505-b3ac-1f4fce3667cc" Sep 6 00:15:29.497678 kubelet[2078]: E0906 00:15:29.497646 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:29.498450 env[1321]: time="2025-09-06T00:15:29.498408297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fnn6,Uid:73eb73cd-c922-443d-a54a-1e5de6deeb3b,Namespace:kube-system,Attempt:0,}" Sep 6 00:15:29.512500 kubelet[2078]: E0906 00:15:29.512454 2078 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:15:29.513110 env[1321]: time="2025-09-06T00:15:29.512911651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:15:29.513110 env[1321]: time="2025-09-06T00:15:29.512949011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:15:29.513110 env[1321]: time="2025-09-06T00:15:29.512960291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:15:29.513251 env[1321]: time="2025-09-06T00:15:29.513111892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4 pid=3952 runtime=io.containerd.runc.v2 Sep 6 00:15:29.554303 env[1321]: time="2025-09-06T00:15:29.554260860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2fnn6,Uid:73eb73cd-c922-443d-a54a-1e5de6deeb3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\"" Sep 6 00:15:29.555142 kubelet[2078]: E0906 00:15:29.555103 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:29.558541 env[1321]: time="2025-09-06T00:15:29.558500922Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:15:29.576113 env[1321]: time="2025-09-06T00:15:29.576063771Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"70fea6746365ca0fd1f037f8f4bedcf8a4319d9890b37b02e31fa8cbf2b5a7d6\"" Sep 6 00:15:29.576824 env[1321]: time="2025-09-06T00:15:29.576754414Z" level=info msg="StartContainer for \"70fea6746365ca0fd1f037f8f4bedcf8a4319d9890b37b02e31fa8cbf2b5a7d6\"" Sep 6 00:15:29.633435 env[1321]: time="2025-09-06T00:15:29.633390181Z" level=info msg="StartContainer for 
\"70fea6746365ca0fd1f037f8f4bedcf8a4319d9890b37b02e31fa8cbf2b5a7d6\" returns successfully" Sep 6 00:15:29.651491 kubelet[2078]: E0906 00:15:29.651447 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:29.668654 env[1321]: time="2025-09-06T00:15:29.668548599Z" level=info msg="shim disconnected" id=70fea6746365ca0fd1f037f8f4bedcf8a4319d9890b37b02e31fa8cbf2b5a7d6 Sep 6 00:15:29.668880 env[1321]: time="2025-09-06T00:15:29.668859481Z" level=warning msg="cleaning up after shim disconnected" id=70fea6746365ca0fd1f037f8f4bedcf8a4319d9890b37b02e31fa8cbf2b5a7d6 namespace=k8s.io Sep 6 00:15:29.668972 env[1321]: time="2025-09-06T00:15:29.668958242Z" level=info msg="cleaning up dead shim" Sep 6 00:15:29.679042 env[1321]: time="2025-09-06T00:15:29.678994652Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4035 runtime=io.containerd.runc.v2\n" Sep 6 00:15:30.412100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950951242.mount: Deactivated successfully. Sep 6 00:15:30.654698 kubelet[2078]: E0906 00:15:30.654250 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:30.656647 env[1321]: time="2025-09-06T00:15:30.656594326Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:15:30.667725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149516089.mount: Deactivated successfully. Sep 6 00:15:30.676829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535665656.mount: Deactivated successfully. 
Sep 6 00:15:30.680425 env[1321]: time="2025-09-06T00:15:30.680374443Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d38a0be6dd9e99540b21b4cd4c72dd7ddde4e48389d9d81b870af827d4d0ecd\"" Sep 6 00:15:30.682203 env[1321]: time="2025-09-06T00:15:30.680907846Z" level=info msg="StartContainer for \"7d38a0be6dd9e99540b21b4cd4c72dd7ddde4e48389d9d81b870af827d4d0ecd\"" Sep 6 00:15:30.736782 env[1321]: time="2025-09-06T00:15:30.735856998Z" level=info msg="StartContainer for \"7d38a0be6dd9e99540b21b4cd4c72dd7ddde4e48389d9d81b870af827d4d0ecd\" returns successfully" Sep 6 00:15:30.764113 env[1321]: time="2025-09-06T00:15:30.764067337Z" level=info msg="shim disconnected" id=7d38a0be6dd9e99540b21b4cd4c72dd7ddde4e48389d9d81b870af827d4d0ecd Sep 6 00:15:30.764401 env[1321]: time="2025-09-06T00:15:30.764381419Z" level=warning msg="cleaning up after shim disconnected" id=7d38a0be6dd9e99540b21b4cd4c72dd7ddde4e48389d9d81b870af827d4d0ecd namespace=k8s.io Sep 6 00:15:30.764468 env[1321]: time="2025-09-06T00:15:30.764455339Z" level=info msg="cleaning up dead shim" Sep 6 00:15:30.772692 env[1321]: time="2025-09-06T00:15:30.772649820Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4097 runtime=io.containerd.runc.v2\n" Sep 6 00:15:31.407865 kubelet[2078]: E0906 00:15:31.407794 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nznrb" podUID="eb2ead0a-e17b-4505-b3ac-1f4fce3667cc" Sep 6 00:15:31.657723 kubelet[2078]: E0906 00:15:31.657663 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:31.659666 env[1321]: time="2025-09-06T00:15:31.659577927Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:15:31.670400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715244841.mount: Deactivated successfully. Sep 6 00:15:31.679132 env[1321]: time="2025-09-06T00:15:31.679076622Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f\"" Sep 6 00:15:31.680203 env[1321]: time="2025-09-06T00:15:31.680170067Z" level=info msg="StartContainer for \"aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f\"" Sep 6 00:15:31.732941 env[1321]: time="2025-09-06T00:15:31.732883681Z" level=info msg="StartContainer for \"aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f\" returns successfully" Sep 6 00:15:31.757279 env[1321]: time="2025-09-06T00:15:31.757222639Z" level=info msg="shim disconnected" id=aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f Sep 6 00:15:31.757467 env[1321]: time="2025-09-06T00:15:31.757368319Z" level=warning msg="cleaning up after shim disconnected" id=aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f namespace=k8s.io Sep 6 00:15:31.757467 env[1321]: time="2025-09-06T00:15:31.757380759Z" level=info msg="cleaning up dead shim" Sep 6 00:15:31.765383 env[1321]: time="2025-09-06T00:15:31.765333558Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4156 runtime=io.containerd.runc.v2\n" Sep 6 00:15:32.412274 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-aaeea88d99202c3bf5d29d77e7bb407850eb60c6bfe9ef560dcb98e16d2ace5f-rootfs.mount: Deactivated successfully. Sep 6 00:15:32.669950 kubelet[2078]: E0906 00:15:32.669830 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:32.673472 env[1321]: time="2025-09-06T00:15:32.672464739Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:15:32.693040 env[1321]: time="2025-09-06T00:15:32.692987995Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea\"" Sep 6 00:15:32.694036 env[1321]: time="2025-09-06T00:15:32.694004840Z" level=info msg="StartContainer for \"e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea\"" Sep 6 00:15:32.748542 env[1321]: time="2025-09-06T00:15:32.748482177Z" level=info msg="StartContainer for \"e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea\" returns successfully" Sep 6 00:15:32.767730 env[1321]: time="2025-09-06T00:15:32.767686307Z" level=info msg="shim disconnected" id=e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea Sep 6 00:15:32.767730 env[1321]: time="2025-09-06T00:15:32.767731148Z" level=warning msg="cleaning up after shim disconnected" id=e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea namespace=k8s.io Sep 6 00:15:32.767987 env[1321]: time="2025-09-06T00:15:32.767751468Z" level=info msg="cleaning up dead shim" Sep 6 00:15:32.774565 env[1321]: time="2025-09-06T00:15:32.774520660Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:15:32Z\" 
level=info msg=\"starting signal loop\" namespace=k8s.io pid=4211 runtime=io.containerd.runc.v2\n" Sep 6 00:15:33.407385 kubelet[2078]: E0906 00:15:33.407019 2078 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-nznrb" podUID="eb2ead0a-e17b-4505-b3ac-1f4fce3667cc" Sep 6 00:15:33.413931 systemd[1]: run-containerd-runc-k8s.io-e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea-runc.Yb2yRJ.mount: Deactivated successfully. Sep 6 00:15:33.414087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e77378b7d5931f8eeb484619f10b11368ded3c47b2a96b663a7d96b457b12eea-rootfs.mount: Deactivated successfully. Sep 6 00:15:33.678428 kubelet[2078]: E0906 00:15:33.676283 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:33.709098 env[1321]: time="2025-09-06T00:15:33.709035744Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:15:33.729483 env[1321]: time="2025-09-06T00:15:33.729430638Z" level=info msg="CreateContainer within sandbox \"4b935a75e36609b6093c3b613d93d9049f3ca3e0bee822d565085cf4b22484e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955\"" Sep 6 00:15:33.730010 env[1321]: time="2025-09-06T00:15:33.729983121Z" level=info msg="StartContainer for \"f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955\"" Sep 6 00:15:33.792033 env[1321]: time="2025-09-06T00:15:33.791987166Z" level=info msg="StartContainer for 
\"f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955\" returns successfully" Sep 6 00:15:34.037757 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 6 00:15:34.695156 kubelet[2078]: E0906 00:15:34.695109 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:34.722691 kubelet[2078]: I0906 00:15:34.722624 2078 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2fnn6" podStartSLOduration=7.722605449 podStartE2EDuration="7.722605449s" podCreationTimestamp="2025-09-06 00:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:15:34.72055496 +0000 UTC m=+100.399220113" watchObservedRunningTime="2025-09-06 00:15:34.722605449 +0000 UTC m=+100.401270602" Sep 6 00:15:35.407623 kubelet[2078]: E0906 00:15:35.407575 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:35.696955 kubelet[2078]: E0906 00:15:35.696570 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:36.308264 systemd[1]: run-containerd-runc-k8s.io-f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955-runc.n3MJVQ.mount: Deactivated successfully. 
Sep 6 00:15:36.941757 systemd-networkd[1099]: lxc_health: Link UP Sep 6 00:15:36.950442 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:15:36.949977 systemd-networkd[1099]: lxc_health: Gained carrier Sep 6 00:15:37.504437 kubelet[2078]: E0906 00:15:37.504405 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:37.699534 kubelet[2078]: E0906 00:15:37.699503 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:38.445822 systemd[1]: run-containerd-runc-k8s.io-f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955-runc.ms8HvJ.mount: Deactivated successfully. Sep 6 00:15:38.701573 kubelet[2078]: E0906 00:15:38.701442 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:38.847894 systemd-networkd[1099]: lxc_health: Gained IPv6LL Sep 6 00:15:40.574864 systemd[1]: run-containerd-runc-k8s.io-f3b168b53d3c8fbcb106df0f9f10f8393f46ee305c867d3343dce3d31bf62955-runc.TI6S3O.mount: Deactivated successfully. Sep 6 00:15:42.408706 kubelet[2078]: E0906 00:15:42.407792 2078 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:15:42.784980 sshd[3919]: pam_unix(sshd:session): session closed for user core Sep 6 00:15:42.788106 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:49500.service: Deactivated successfully. Sep 6 00:15:42.789156 systemd-logind[1305]: Session 26 logged out. Waiting for processes to exit. Sep 6 00:15:42.789170 systemd[1]: session-26.scope: Deactivated successfully. 
Sep 6 00:15:42.790386 systemd-logind[1305]: Removed session 26.