Sep 9 00:36:28.688965 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:36:28.688986 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025
Sep 9 00:36:28.688994 kernel: efi: EFI v2.70 by EDK II
Sep 9 00:36:28.689000 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 9 00:36:28.689005 kernel: random: crng init done
Sep 9 00:36:28.689010 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:36:28.689016 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 9 00:36:28.689023 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:36:28.689029 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689034 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689040 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689045 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689051 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689056 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689065 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689071 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689077 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:36:28.689083 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:36:28.689089 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:36:28.689095 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:36:28.689101 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 9 00:36:28.689107 kernel: Zone ranges:
Sep 9 00:36:28.689113 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:36:28.689119 kernel: DMA32 empty
Sep 9 00:36:28.689125 kernel: Normal empty
Sep 9 00:36:28.689131 kernel: Movable zone start for each node
Sep 9 00:36:28.689137 kernel: Early memory node ranges
Sep 9 00:36:28.689143 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 9 00:36:28.689149 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 9 00:36:28.689155 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 9 00:36:28.689160 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 9 00:36:28.689169 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 9 00:36:28.689175 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 9 00:36:28.689181 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 9 00:36:28.689187 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:36:28.689195 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:36:28.689203 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:36:28.689209 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:36:28.689215 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:36:28.689222 kernel: psci: Trusted OS migration not required
Sep 9 00:36:28.689232 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:36:28.689240 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:36:28.689248 kernel: ACPI: SRAT not present
Sep 9 00:36:28.689254 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 9 00:36:28.689261 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 9 00:36:28.689267 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:36:28.689274 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:36:28.689280 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:36:28.689286 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:36:28.689292 kernel: CPU features: detected: Spectre-v4
Sep 9 00:36:28.689298 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:36:28.689306 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:36:28.689312 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:36:28.689319 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:36:28.689325 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:36:28.689332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:36:28.689338 kernel: Policy zone: DMA
Sep 9 00:36:28.689345 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:36:28.689352 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:36:28.689358 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:36:28.689365 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:36:28.689371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:36:28.689379 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 9 00:36:28.689385 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:36:28.689391 kernel: trace event string verifier disabled
Sep 9 00:36:28.689397 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:36:28.689404 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:36:28.689410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:36:28.689417 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:36:28.689423 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:36:28.689429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:36:28.689435 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:36:28.689441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:36:28.689449 kernel: GICv3: 256 SPIs implemented
Sep 9 00:36:28.689474 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:36:28.689481 kernel: GICv3: Distributor has no Range Selector support
Sep 9 00:36:28.689487 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:36:28.689493 kernel: GICv3: 16 PPIs implemented
Sep 9 00:36:28.689499 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:36:28.689505 kernel: ACPI: SRAT not present
Sep 9 00:36:28.689512 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:36:28.689518 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:36:28.689524 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:36:28.689531 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 9 00:36:28.689537 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 9 00:36:28.689544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:36:28.689550 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:36:28.689557 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:36:28.689563 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:36:28.689569 kernel: arm-pv: using stolen time PV
Sep 9 00:36:28.689576 kernel: Console: colour dummy device 80x25
Sep 9 00:36:28.689583 kernel: ACPI: Core revision 20210730
Sep 9 00:36:28.689589 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:36:28.689596 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:36:28.689602 kernel: LSM: Security Framework initializing
Sep 9 00:36:28.689610 kernel: SELinux: Initializing.
Sep 9 00:36:28.689616 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:36:28.689623 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:36:28.689629 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:36:28.689635 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:36:28.689641 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:36:28.689647 kernel: Remapping and enabling EFI services.
Sep 9 00:36:28.689654 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:36:28.689660 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:36:28.689667 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:36:28.689673 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 9 00:36:28.689680 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:36:28.689686 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:36:28.689692 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:36:28.689699 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:36:28.689706 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 9 00:36:28.689712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:36:28.689718 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:36:28.689725 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:36:28.689732 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:36:28.689739 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 9 00:36:28.689745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:36:28.689752 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:36:28.689770 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:36:28.689778 kernel: SMP: Total of 4 processors activated.
Sep 9 00:36:28.689785 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:36:28.689791 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:36:28.689798 kernel: CPU features: detected: Common not Private translations
Sep 9 00:36:28.689805 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:36:28.689811 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:36:28.689818 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:36:28.689825 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:36:28.689832 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:36:28.689839 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:36:28.689845 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:36:28.689852 kernel: alternatives: patching kernel code
Sep 9 00:36:28.689859 kernel: devtmpfs: initialized
Sep 9 00:36:28.689891 kernel: KASLR enabled
Sep 9 00:36:28.689899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:36:28.689906 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:36:28.689913 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:36:28.689919 kernel: SMBIOS 3.0.0 present.
Sep 9 00:36:28.689926 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 9 00:36:28.689932 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:36:28.689939 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:36:28.689955 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:36:28.689962 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:36:28.689969 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:36:28.689976 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1
Sep 9 00:36:28.689982 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:36:28.689989 kernel: cpuidle: using governor menu
Sep 9 00:36:28.689996 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:36:28.690003 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:36:28.690009 kernel: ACPI: bus type PCI registered
Sep 9 00:36:28.690017 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:36:28.690023 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:36:28.690030 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:36:28.690037 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:36:28.690044 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:36:28.690050 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:36:28.690057 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:36:28.690063 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:36:28.690070 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:36:28.690078 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:36:28.690085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:36:28.690092 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 9 00:36:28.690098 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 9 00:36:28.690105 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 9 00:36:28.690111 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:36:28.690118 kernel: ACPI: Interpreter enabled
Sep 9 00:36:28.690125 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:36:28.690131 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:36:28.690139 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:36:28.690146 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:36:28.690153 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:36:28.694167 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:36:28.694293 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:36:28.694364 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:36:28.694430 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:36:28.694500 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:36:28.694509 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:36:28.694516 kernel: PCI host bridge to bus 0000:00
Sep 9 00:36:28.694589 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:36:28.694650 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:36:28.694709 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:36:28.694781 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:36:28.694867 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:36:28.694966 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:36:28.695088 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:36:28.695186 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:36:28.695256 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:36:28.695322 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:36:28.695382 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:36:28.695447 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:36:28.695502 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:36:28.695559 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:36:28.695611 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:36:28.695620 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:36:28.695627 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:36:28.695634 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:36:28.695640 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:36:28.695648 kernel: iommu: Default domain type: Translated
Sep 9 00:36:28.695656 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:36:28.695663 kernel: vgaarb: loaded
Sep 9 00:36:28.695669 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 00:36:28.695676 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 00:36:28.695683 kernel: PTP clock support registered
Sep 9 00:36:28.695689 kernel: Registered efivars operations
Sep 9 00:36:28.695696 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:36:28.695703 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:36:28.695711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:36:28.695718 kernel: pnp: PnP ACPI init
Sep 9 00:36:28.695798 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:36:28.695809 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:36:28.695816 kernel: NET: Registered PF_INET protocol family
Sep 9 00:36:28.695823 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:36:28.695829 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:36:28.695836 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:36:28.695844 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:36:28.695851 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 9 00:36:28.695858 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:36:28.695865 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:36:28.695871 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:36:28.695878 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:36:28.695885 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:36:28.695891 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:36:28.695898 kernel: kvm [1]: HYP mode not available
Sep 9 00:36:28.695906 kernel: Initialise system trusted keyrings
Sep 9 00:36:28.695912 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:36:28.695919 kernel: Key type asymmetric registered
Sep 9 00:36:28.695925 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:36:28.695932 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 00:36:28.695939 kernel: io scheduler mq-deadline registered
Sep 9 00:36:28.695954 kernel: io scheduler kyber registered
Sep 9 00:36:28.695961 kernel: io scheduler bfq registered
Sep 9 00:36:28.695967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:36:28.695976 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:36:28.695983 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:36:28.696050 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:36:28.696060 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:36:28.696066 kernel: thunder_xcv, ver 1.0
Sep 9 00:36:28.696073 kernel: thunder_bgx, ver 1.0
Sep 9 00:36:28.696080 kernel: nicpf, ver 1.0
Sep 9 00:36:28.696086 kernel: nicvf, ver 1.0
Sep 9 00:36:28.696157 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:36:28.696216 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:36:28 UTC (1757378188)
Sep 9 00:36:28.696225 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:36:28.696232 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:36:28.696239 kernel: Segment Routing with IPv6
Sep 9 00:36:28.696245 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:36:28.696252 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:36:28.696259 kernel: Key type dns_resolver registered
Sep 9 00:36:28.696265 kernel: registered taskstats version 1
Sep 9 00:36:28.696274 kernel: Loading compiled-in X.509 certificates
Sep 9 00:36:28.696281 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252'
Sep 9 00:36:28.696287 kernel: Key type .fscrypt registered
Sep 9 00:36:28.696294 kernel: Key type fscrypt-provisioning registered
Sep 9 00:36:28.696300 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:36:28.696307 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:36:28.696313 kernel: ima: No architecture policies found
Sep 9 00:36:28.696320 kernel: clk: Disabling unused clocks
Sep 9 00:36:28.696327 kernel: Freeing unused kernel memory: 36416K
Sep 9 00:36:28.696334 kernel: Run /init as init process
Sep 9 00:36:28.696341 kernel: with arguments:
Sep 9 00:36:28.696348 kernel: /init
Sep 9 00:36:28.696354 kernel: with environment:
Sep 9 00:36:28.696360 kernel: HOME=/
Sep 9 00:36:28.696367 kernel: TERM=linux
Sep 9 00:36:28.696374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:36:28.696382 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:36:28.696393 systemd[1]: Detected virtualization kvm.
Sep 9 00:36:28.696400 systemd[1]: Detected architecture arm64.
Sep 9 00:36:28.696407 systemd[1]: Running in initrd.
Sep 9 00:36:28.696414 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:36:28.696420 systemd[1]: Hostname set to .
Sep 9 00:36:28.696428 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:36:28.696435 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:36:28.696442 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:36:28.696450 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:36:28.696457 systemd[1]: Reached target paths.target.
Sep 9 00:36:28.696464 systemd[1]: Reached target slices.target.
Sep 9 00:36:28.696471 systemd[1]: Reached target swap.target.
Sep 9 00:36:28.696478 systemd[1]: Reached target timers.target.
Sep 9 00:36:28.696485 systemd[1]: Listening on iscsid.socket.
Sep 9 00:36:28.696492 systemd[1]: Listening on iscsiuio.socket.
Sep 9 00:36:28.696500 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:36:28.696508 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:36:28.696515 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:36:28.696522 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:36:28.696529 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:36:28.696536 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:36:28.696543 systemd[1]: Reached target sockets.target.
Sep 9 00:36:28.696550 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:36:28.696558 systemd[1]: Finished network-cleanup.service.
Sep 9 00:36:28.696566 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:36:28.696573 systemd[1]: Starting systemd-journald.service...
Sep 9 00:36:28.696581 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:36:28.696588 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:36:28.696595 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 9 00:36:28.696602 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:36:28.696609 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:36:28.696616 kernel: audit: type=1130 audit(1757378188.689:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.696624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:36:28.696636 systemd-journald[290]: Journal started
Sep 9 00:36:28.696677 systemd-journald[290]: Runtime Journal (/run/log/journal/f56d086531a8474cbefcb0c0ddead4e8) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:36:28.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.695234 systemd-modules-load[291]: Inserted module 'overlay'
Sep 9 00:36:28.698973 systemd[1]: Started systemd-journald.service.
Sep 9 00:36:28.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.701533 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 9 00:36:28.705442 kernel: audit: type=1130 audit(1757378188.698:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.705465 kernel: audit: type=1130 audit(1757378188.701:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.705525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:36:28.709138 kernel: audit: type=1130 audit(1757378188.705:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.709973 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 9 00:36:28.715542 systemd-resolved[292]: Positive Trust Anchors:
Sep 9 00:36:28.717009 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:36:28.715558 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:36:28.715586 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:36:28.719916 systemd-resolved[292]: Defaulting to hostname 'linux'.
Sep 9 00:36:28.727660 kernel: Bridge firewalling registered
Sep 9 00:36:28.727680 kernel: audit: type=1130 audit(1757378188.724:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.720721 systemd[1]: Started systemd-resolved.service.
Sep 9 00:36:28.725026 systemd-modules-load[291]: Inserted module 'br_netfilter'
Sep 9 00:36:28.726400 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:36:28.735807 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 9 00:36:28.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.739595 systemd[1]: Starting dracut-cmdline.service...
Sep 9 00:36:28.741403 kernel: audit: type=1130 audit(1757378188.735:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.741424 kernel: SCSI subsystem initialized
Sep 9 00:36:28.747985 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:36:28.748024 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:36:28.748040 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 9 00:36:28.748433 dracut-cmdline[309]: dracut-dracut-053
Sep 9 00:36:28.750494 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:36:28.755407 systemd-modules-load[291]: Inserted module 'dm_multipath'
Sep 9 00:36:28.756596 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:36:28.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.758288 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:36:28.761586 kernel: audit: type=1130 audit(1757378188.756:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.766046 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:36:28.770148 kernel: audit: type=1130 audit(1757378188.765:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.813978 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:36:28.826979 kernel: iscsi: registered transport (tcp)
Sep 9 00:36:28.841961 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:36:28.841983 kernel: QLogic iSCSI HBA Driver
Sep 9 00:36:28.874989 systemd[1]: Finished dracut-cmdline.service.
Sep 9 00:36:28.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.876645 systemd[1]: Starting dracut-pre-udev.service...
Sep 9 00:36:28.879382 kernel: audit: type=1130 audit(1757378188.874:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:28.921986 kernel: raid6: neonx8 gen() 13749 MB/s
Sep 9 00:36:28.938965 kernel: raid6: neonx8 xor() 9405 MB/s
Sep 9 00:36:28.955960 kernel: raid6: neonx4 gen() 13521 MB/s
Sep 9 00:36:28.972970 kernel: raid6: neonx4 xor() 11008 MB/s
Sep 9 00:36:28.989963 kernel: raid6: neonx2 gen() 13050 MB/s
Sep 9 00:36:29.006961 kernel: raid6: neonx2 xor() 10230 MB/s
Sep 9 00:36:29.023966 kernel: raid6: neonx1 gen() 10546 MB/s
Sep 9 00:36:29.040970 kernel: raid6: neonx1 xor() 8775 MB/s
Sep 9 00:36:29.057968 kernel: raid6: int64x8 gen() 6263 MB/s
Sep 9 00:36:29.074965 kernel: raid6: int64x8 xor() 3544 MB/s
Sep 9 00:36:29.091967 kernel: raid6: int64x4 gen() 7221 MB/s
Sep 9 00:36:29.108973 kernel: raid6: int64x4 xor() 3848 MB/s
Sep 9 00:36:29.125981 kernel: raid6: int64x2 gen() 6152 MB/s
Sep 9 00:36:29.142970 kernel: raid6: int64x2 xor() 3320 MB/s
Sep 9 00:36:29.159961 kernel: raid6: int64x1 gen() 5043 MB/s
Sep 9 00:36:29.177381 kernel: raid6: int64x1 xor() 2645 MB/s
Sep 9 00:36:29.177392 kernel: raid6: using algorithm neonx8 gen() 13749 MB/s
Sep 9 00:36:29.177401 kernel: raid6: .... xor() 9405 MB/s, rmw enabled
Sep 9 00:36:29.177418 kernel: raid6: using neon recovery algorithm
Sep 9 00:36:29.188290 kernel: xor: measuring software checksum speed
Sep 9 00:36:29.188309 kernel: 8regs : 16852 MB/sec
Sep 9 00:36:29.189405 kernel: 32regs : 20707 MB/sec
Sep 9 00:36:29.189415 kernel: arm64_neon : 27794 MB/sec
Sep 9 00:36:29.189424 kernel: xor: using function: arm64_neon (27794 MB/sec)
Sep 9 00:36:29.241970 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 9 00:36:29.252086 systemd[1]: Finished dracut-pre-udev.service.
Sep 9 00:36:29.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:29.252000 audit: BPF prog-id=7 op=LOAD
Sep 9 00:36:29.252000 audit: BPF prog-id=8 op=LOAD
Sep 9 00:36:29.253867 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:36:29.266125 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Sep 9 00:36:29.269562 systemd[1]: Started systemd-udevd.service.
Sep 9 00:36:29.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:29.274206 systemd[1]: Starting dracut-pre-trigger.service...
Sep 9 00:36:29.284966 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 9 00:36:29.315714 systemd[1]: Finished dracut-pre-trigger.service.
Sep 9 00:36:29.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:29.317610 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:36:29.351494 systemd[1]: Finished systemd-udev-trigger.service.
Sep 9 00:36:29.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:29.383961 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:36:29.387377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:36:29.387395 kernel: GPT:9289727 != 19775487
Sep 9 00:36:29.387404 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:36:29.387412 kernel: GPT:9289727 != 19775487
Sep 9 00:36:29.387420 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:36:29.387429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:36:29.400285 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 9 00:36:29.402766 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (538) Sep 9 00:36:29.402071 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:36:29.411981 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:36:29.415311 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 9 00:36:29.418764 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:36:29.421272 systemd[1]: Starting disk-uuid.service... Sep 9 00:36:29.427378 disk-uuid[562]: Primary Header is updated. Sep 9 00:36:29.427378 disk-uuid[562]: Secondary Entries is updated. Sep 9 00:36:29.427378 disk-uuid[562]: Secondary Header is updated. Sep 9 00:36:29.430969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:36:29.433971 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:36:29.435964 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:36:30.435968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:36:30.436283 disk-uuid[563]: The operation has completed successfully. Sep 9 00:36:30.462305 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:36:30.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.462398 systemd[1]: Finished disk-uuid.service. Sep 9 00:36:30.466497 systemd[1]: Starting verity-setup.service... Sep 9 00:36:30.483973 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:36:30.504115 systemd[1]: Found device dev-mapper-usr.device. Sep 9 00:36:30.506468 systemd[1]: Mounting sysusr-usr.mount... 
Sep 9 00:36:30.508835 systemd[1]: Finished verity-setup.service. Sep 9 00:36:30.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.552973 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:36:30.553013 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:36:30.553871 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:36:30.554586 systemd[1]: Starting ignition-setup.service... Sep 9 00:36:30.556426 systemd[1]: Starting parse-ip-for-networkd.service... Sep 9 00:36:30.565499 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:36:30.565532 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:36:30.565542 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:36:30.572647 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:36:30.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.578490 systemd[1]: Finished ignition-setup.service. Sep 9 00:36:30.579901 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 9 00:36:30.628874 ignition[652]: Ignition 2.14.0 Sep 9 00:36:30.628884 ignition[652]: Stage: fetch-offline Sep 9 00:36:30.628920 ignition[652]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:36:30.628929 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:36:30.629062 ignition[652]: parsed url from cmdline: "" Sep 9 00:36:30.629065 ignition[652]: no config URL provided Sep 9 00:36:30.629069 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:36:30.629076 ignition[652]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:36:30.629093 ignition[652]: op(1): [started] loading QEMU firmware config module Sep 9 00:36:30.629097 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:36:30.635289 ignition[652]: op(1): [finished] loading QEMU firmware config module Sep 9 00:36:30.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.660000 audit: BPF prog-id=9 op=LOAD Sep 9 00:36:30.660121 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:36:30.662459 systemd[1]: Starting systemd-networkd.service... Sep 9 00:36:30.677626 ignition[652]: parsing config with SHA512: 7945d61c25ad073b746262ee662317921cd498af0c60baa9c5d798e820ddee4f7c55ec27324c3c38b06ed4ec2c3d9259c1b5a4ac3ad9357d305631487a038d96 Sep 9 00:36:30.681673 systemd-networkd[739]: lo: Link UP Sep 9 00:36:30.681688 systemd-networkd[739]: lo: Gained carrier Sep 9 00:36:30.682430 systemd-networkd[739]: Enumeration completed Sep 9 00:36:30.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:36:30.686572 ignition[652]: fetch-offline: fetch-offline passed Sep 9 00:36:30.682786 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:36:30.686621 ignition[652]: Ignition finished successfully Sep 9 00:36:30.682983 systemd[1]: Started systemd-networkd.service. Sep 9 00:36:30.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.684176 systemd-networkd[739]: eth0: Link UP Sep 9 00:36:30.684180 systemd-networkd[739]: eth0: Gained carrier Sep 9 00:36:30.684871 systemd[1]: Reached target network.target. Sep 9 00:36:30.686105 unknown[652]: fetched base config from "system" Sep 9 00:36:30.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.686111 unknown[652]: fetched user config from "qemu" Sep 9 00:36:30.686324 systemd[1]: Starting iscsiuio.service... Sep 9 00:36:30.690672 systemd[1]: Finished ignition-fetch-offline.service. Sep 9 00:36:30.691621 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:36:30.692973 systemd[1]: Starting ignition-kargs.service... Sep 9 00:36:30.700976 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:36:30.700976 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 9 00:36:30.700976 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:36:30.700976 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:36:30.700976 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:36:30.700976 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:36:30.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.693939 systemd[1]: Started iscsiuio.service. Sep 9 00:36:30.701922 ignition[743]: Ignition 2.14.0 Sep 9 00:36:30.696445 systemd[1]: Starting iscsid.service... Sep 9 00:36:30.701928 ignition[743]: Stage: kargs Sep 9 00:36:30.702437 systemd[1]: Started iscsid.service. Sep 9 00:36:30.702035 ignition[743]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:36:30.703037 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:36:30.702044 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:36:30.709486 systemd[1]: Finished ignition-kargs.service. Sep 9 00:36:30.703647 ignition[743]: kargs: kargs passed Sep 9 00:36:30.711756 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:36:30.703690 ignition[743]: Ignition finished successfully Sep 9 00:36:30.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
addr=? terminal=? res=success' Sep 9 00:36:30.714813 systemd[1]: Starting ignition-disks.service... Sep 9 00:36:30.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.722285 ignition[752]: Ignition 2.14.0 Sep 9 00:36:30.724281 systemd[1]: Finished dracut-initqueue.service. Sep 9 00:36:30.722291 ignition[752]: Stage: disks Sep 9 00:36:30.725826 systemd[1]: Finished ignition-disks.service. Sep 9 00:36:30.722380 ignition[752]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:36:30.727217 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:36:30.722389 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:36:30.728399 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:36:30.723336 ignition[752]: disks: disks passed Sep 9 00:36:30.729484 systemd[1]: Reached target local-fs.target. Sep 9 00:36:30.723378 ignition[752]: Ignition finished successfully Sep 9 00:36:30.730646 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:36:30.731639 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:36:30.732894 systemd[1]: Reached target remote-fs.target. Sep 9 00:36:30.734275 systemd[1]: Reached target sysinit.target. Sep 9 00:36:30.735543 systemd[1]: Reached target basic.target. Sep 9 00:36:30.737797 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:36:30.746443 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:36:30.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.747739 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:36:30.759191 systemd-fsck[773]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:36:30.762674 systemd[1]: Finished systemd-fsck-root.service. 
Sep 9 00:36:30.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.764323 systemd[1]: Mounting sysroot.mount... Sep 9 00:36:30.771966 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 9 00:36:30.773421 systemd[1]: Mounted sysroot.mount. Sep 9 00:36:30.774479 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:36:30.777099 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:36:30.777812 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:36:30.777848 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:36:30.777928 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:36:30.779732 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:36:30.781204 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:36:30.785396 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:36:30.788855 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:36:30.791919 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:36:30.796019 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:36:30.826364 systemd[1]: Finished initrd-setup-root.service. Sep 9 00:36:30.827722 systemd[1]: Starting ignition-mount.service... Sep 9 00:36:30.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.828887 systemd[1]: Starting sysroot-boot.service... Sep 9 00:36:30.832989 bash[825]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 9 00:36:30.841666 ignition[826]: INFO : Ignition 2.14.0 Sep 9 00:36:30.841666 ignition[826]: INFO : Stage: mount Sep 9 00:36:30.842914 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:36:30.842914 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:36:30.842914 ignition[826]: INFO : mount: mount passed Sep 9 00:36:30.842914 ignition[826]: INFO : Ignition finished successfully Sep 9 00:36:30.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:30.845147 systemd[1]: Finished ignition-mount.service. Sep 9 00:36:30.848825 systemd[1]: Finished sysroot-boot.service. Sep 9 00:36:30.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:31.359442 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.92 Sep 9 00:36:31.359454 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Sep 9 00:36:31.517806 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:36:31.527970 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (835) Sep 9 00:36:31.530020 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:36:31.530057 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:36:31.530076 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:36:31.535095 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 9 00:36:31.538412 systemd[1]: Starting ignition-files.service... 
Sep 9 00:36:31.558689 ignition[855]: INFO : Ignition 2.14.0 Sep 9 00:36:31.558689 ignition[855]: INFO : Stage: files Sep 9 00:36:31.560329 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:36:31.560329 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:36:31.560329 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:36:31.565976 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:36:31.565976 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:36:31.573471 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:36:31.575510 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:36:31.577077 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:36:31.577077 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:36:31.577077 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:36:31.577077 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:36:31.577077 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 9 00:36:31.575847 unknown[855]: wrote ssh authorized keys file for user: core Sep 9 00:36:31.650172 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:36:31.918821 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 9 00:36:31.921254 
ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:36:31.921254 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 00:36:32.034152 systemd-networkd[739]: eth0: Gained IPv6LL Sep 9 00:36:32.120583 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:36:32.216350 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:36:32.233365 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 9 00:36:32.699803 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 9 00:36:33.147606 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 9 00:36:33.147606 ignition[855]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(f): [started] processing unit 
"prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:36:33.155185 ignition[855]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:36:33.184804 ignition[855]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:36:33.184804 ignition[855]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:36:33.184804 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:36:33.184804 ignition[855]: 
INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:36:33.184804 ignition[855]: INFO : files: files passed Sep 9 00:36:33.184804 ignition[855]: INFO : Ignition finished successfully Sep 9 00:36:33.195960 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 9 00:36:33.195980 kernel: audit: type=1130 audit(1757378193.184:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.183968 systemd[1]: Finished ignition-files.service. Sep 9 00:36:33.186278 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:36:33.189761 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:36:33.205143 kernel: audit: type=1130 audit(1757378193.195:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.205163 kernel: audit: type=1131 audit(1757378193.195:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 9 00:36:33.205380 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:36:33.210127 kernel: audit: type=1130 audit(1757378193.196:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.190434 systemd[1]: Starting ignition-quench.service... Sep 9 00:36:33.211289 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:36:33.195419 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:36:33.195498 systemd[1]: Finished ignition-quench.service. Sep 9 00:36:33.196801 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:36:33.197831 systemd[1]: Reached target ignition-complete.target. Sep 9 00:36:33.204176 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:36:33.218606 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:36:33.225412 kernel: audit: type=1130 audit(1757378193.218:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.225438 kernel: audit: type=1131 audit(1757378193.218:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 9 00:36:33.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.218693 systemd[1]: Finished initrd-parse-etc.service. Sep 9 00:36:33.219514 systemd[1]: Reached target initrd-fs.target. Sep 9 00:36:33.222478 systemd[1]: Reached target initrd.target. Sep 9 00:36:33.227833 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:36:33.229609 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:36:33.239564 systemd[1]: Finished dracut-pre-pivot.service. Sep 9 00:36:33.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.241100 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:36:33.243975 kernel: audit: type=1130 audit(1757378193.239:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.250599 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:36:33.251319 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:36:33.252570 systemd[1]: Stopped target timers.target. Sep 9 00:36:33.253553 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:36:33.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.253839 systemd[1]: Stopped dracut-pre-pivot.service. 
Sep 9 00:36:33.259231 kernel: audit: type=1131 audit(1757378193.254:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.256010 systemd[1]: Stopped target initrd.target. Sep 9 00:36:33.258819 systemd[1]: Stopped target basic.target. Sep 9 00:36:33.259882 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:36:33.260938 systemd[1]: Stopped target ignition-diskful.target. Sep 9 00:36:33.262113 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:36:33.263290 systemd[1]: Stopped target remote-fs.target. Sep 9 00:36:33.264333 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:36:33.265509 systemd[1]: Stopped target sysinit.target. Sep 9 00:36:33.266768 systemd[1]: Stopped target local-fs.target. Sep 9 00:36:33.267863 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:36:33.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.269094 systemd[1]: Stopped target swap.target. Sep 9 00:36:33.274523 kernel: audit: type=1131 audit(1757378193.270:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.270316 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:36:33.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:36:33.270422 systemd[1]: Stopped dracut-pre-mount.service. 
Sep 9 00:36:33.278996 kernel: audit: type=1131 audit(1757378193.274:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.271142 systemd[1]: Stopped target cryptsetup.target.
Sep 9 00:36:33.274116 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:36:33.274212 systemd[1]: Stopped dracut-initqueue.service.
Sep 9 00:36:33.275154 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:36:33.275240 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 9 00:36:33.278610 systemd[1]: Stopped target paths.target.
Sep 9 00:36:33.279549 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:36:33.284006 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 9 00:36:33.285413 systemd[1]: Stopped target slices.target.
Sep 9 00:36:33.286655 systemd[1]: Stopped target sockets.target.
Sep 9 00:36:33.287635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:36:33.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.287747 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 9 00:36:33.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.288763 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:36:33.288851 systemd[1]: Stopped ignition-files.service.
Sep 9 00:36:33.292441 iscsid[745]: iscsid shutting down.
Sep 9 00:36:33.290851 systemd[1]: Stopping ignition-mount.service...
Sep 9 00:36:33.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.292032 systemd[1]: Stopping iscsid.service...
Sep 9 00:36:33.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.292849 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:36:33.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.292973 systemd[1]: Stopped kmod-static-nodes.service.
Sep 9 00:36:33.294869 systemd[1]: Stopping sysroot-boot.service...
Sep 9 00:36:33.295473 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:36:33.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.301934 ignition[896]: INFO : Ignition 2.14.0
Sep 9 00:36:33.301934 ignition[896]: INFO : Stage: umount
Sep 9 00:36:33.301934 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:36:33.301934 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:36:33.301934 ignition[896]: INFO : umount: umount passed
Sep 9 00:36:33.301934 ignition[896]: INFO : Ignition finished successfully
Sep 9 00:36:33.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success'
Sep 9 00:36:33.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.295597 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 9 00:36:33.296416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:36:33.296504 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 9 00:36:33.299864 systemd[1]: iscsid.service: Deactivated successfully.
Sep 9 00:36:33.299990 systemd[1]: Stopped iscsid.service.
Sep 9 00:36:33.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.301340 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:36:33.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.301401 systemd[1]: Closed iscsid.socket.
Sep 9 00:36:33.303006 systemd[1]: Stopping iscsiuio.service...
Sep 9 00:36:33.303772 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:36:33.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 9 00:36:33.303865 systemd[1]: Stopped ignition-mount.service.
Sep 9 00:36:33.305684 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:36:33.306079 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 9 00:36:33.306151 systemd[1]: Stopped iscsiuio.service.
Sep 9 00:36:33.309605 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:36:33.309685 systemd[1]: Finished initrd-cleanup.service.
Sep 9 00:36:33.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.312607 systemd[1]: Stopped target network.target.
Sep 9 00:36:33.314304 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:36:33.314335 systemd[1]: Closed iscsiuio.socket.
Sep 9 00:36:33.317143 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:36:33.317186 systemd[1]: Stopped ignition-disks.service.
Sep 9 00:36:33.317784 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:36:33.317822 systemd[1]: Stopped ignition-kargs.service.
Sep 9 00:36:33.320465 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:36:33.320502 systemd[1]: Stopped ignition-setup.service.
Sep 9 00:36:33.322574 systemd[1]: Stopping systemd-networkd.service...
Sep 9 00:36:33.323209 systemd[1]: Stopping systemd-resolved.service...
Sep 9 00:36:33.328728 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:36:33.328826 systemd[1]: Stopped systemd-resolved.service.
Sep 9 00:36:33.337032 systemd-networkd[739]: eth0: DHCPv6 lease lost
Sep 9 00:36:33.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.340066 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:36:33.340191 systemd[1]: Stopped systemd-networkd.service.
Sep 9 00:36:33.340997 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:36:33.341024 systemd[1]: Closed systemd-networkd.socket.
Sep 9 00:36:33.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.342334 systemd[1]: Stopping network-cleanup.service...
Sep 9 00:36:33.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.345000 audit: BPF prog-id=6 op=UNLOAD
Sep 9 00:36:33.343389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:36:33.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.343441 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 9 00:36:33.344768 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:36:33.350000 audit: BPF prog-id=9 op=UNLOAD
Sep 9 00:36:33.344803 systemd[1]: Stopped systemd-sysctl.service.
Sep 9 00:36:33.346439 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:36:33.346480 systemd[1]: Stopped systemd-modules-load.service.
Sep 9 00:36:33.347465 systemd[1]: Stopping systemd-udevd.service...
Sep 9 00:36:33.352357 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:36:33.355804 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:36:33.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 9 00:36:33.355896 systemd[1]: Stopped network-cleanup.service.
Sep 9 00:36:33.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.359101 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:36:33.359212 systemd[1]: Stopped systemd-udevd.service.
Sep 9 00:36:33.360470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:36:33.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.360503 systemd[1]: Closed systemd-udevd-control.socket.
Sep 9 00:36:33.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.361193 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:36:33.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.361222 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 9 00:36:33.362416 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:36:33.362453 systemd[1]: Stopped dracut-pre-udev.service.
Sep 9 00:36:33.363438 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:36:33.363471 systemd[1]: Stopped dracut-cmdline.service.
Sep 9 00:36:33.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Sep 9 00:36:33.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.364572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:36:33.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:33.364604 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 9 00:36:33.366258 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 9 00:36:33.366850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:36:33.366893 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 9 00:36:33.368281 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:36:33.368374 systemd[1]: Stopped sysroot-boot.service.
Sep 9 00:36:33.369278 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:36:33.369317 systemd[1]: Stopped initrd-setup-root.service.
Sep 9 00:36:33.371143 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:36:33.371216 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 9 00:36:33.372195 systemd[1]: Reached target initrd-switch-root.target.
Sep 9 00:36:33.374035 systemd[1]: Starting initrd-switch-root.service...
Sep 9 00:36:33.379481 systemd[1]: Switching root.
Sep 9 00:36:33.381000 audit: BPF prog-id=5 op=UNLOAD
Sep 9 00:36:33.381000 audit: BPF prog-id=4 op=UNLOAD
Sep 9 00:36:33.381000 audit: BPF prog-id=3 op=UNLOAD
Sep 9 00:36:33.381000 audit: BPF prog-id=8 op=UNLOAD
Sep 9 00:36:33.381000 audit: BPF prog-id=7 op=UNLOAD
Sep 9 00:36:33.386856 systemd-journald[290]: Journal stopped
Sep 9 00:36:35.465105 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:36:35.465163 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 9 00:36:35.465175 kernel: SELinux: Class anon_inode not defined in policy.
Sep 9 00:36:35.465185 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 9 00:36:35.465194 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:36:35.465204 kernel: SELinux: policy capability open_perms=1
Sep 9 00:36:35.465214 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:36:35.465228 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:36:35.465244 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:36:35.465253 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:36:35.465263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:36:35.465272 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:36:35.465284 systemd[1]: Successfully loaded SELinux policy in 36.343ms.
Sep 9 00:36:35.465299 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.906ms.
Sep 9 00:36:35.465311 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:36:35.465323 systemd[1]: Detected virtualization kvm.
Sep 9 00:36:35.465332 systemd[1]: Detected architecture arm64.
Sep 9 00:36:35.465343 systemd[1]: Detected first boot.
Sep 9 00:36:35.465359 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:36:35.465369 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 9 00:36:35.465379 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:36:35.465389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:36:35.465401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:36:35.465413 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:36:35.465424 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:36:35.465436 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 9 00:36:35.465446 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 9 00:36:35.465457 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 9 00:36:35.465467 systemd[1]: Created slice system-getty.slice.
Sep 9 00:36:35.465478 systemd[1]: Created slice system-modprobe.slice.
Sep 9 00:36:35.465492 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 9 00:36:35.465502 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 9 00:36:35.465514 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 9 00:36:35.465524 systemd[1]: Created slice user.slice.
Sep 9 00:36:35.465535 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:36:35.465545 systemd[1]: Started systemd-ask-password-wall.path.
Sep 9 00:36:35.465555 systemd[1]: Set up automount boot.automount.
Sep 9 00:36:35.465565 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 9 00:36:35.465575 systemd[1]: Reached target integritysetup.target.
Sep 9 00:36:35.465586 systemd[1]: Reached target remote-cryptsetup.target.
Sep 9 00:36:35.465597 systemd[1]: Reached target remote-fs.target.
Sep 9 00:36:35.465608 systemd[1]: Reached target slices.target.
Sep 9 00:36:35.465619 systemd[1]: Reached target swap.target.
Sep 9 00:36:35.465629 systemd[1]: Reached target torcx.target.
Sep 9 00:36:35.465639 systemd[1]: Reached target veritysetup.target.
Sep 9 00:36:35.465649 systemd[1]: Listening on systemd-coredump.socket.
Sep 9 00:36:35.465659 systemd[1]: Listening on systemd-initctl.socket.
Sep 9 00:36:35.465669 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:36:35.465679 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:36:35.465689 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:36:35.465701 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:36:35.465711 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:36:35.465727 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:36:35.465741 systemd[1]: Listening on systemd-userdbd.socket.
Sep 9 00:36:35.465753 systemd[1]: Mounting dev-hugepages.mount...
Sep 9 00:36:35.465764 systemd[1]: Mounting dev-mqueue.mount...
Sep 9 00:36:35.465774 systemd[1]: Mounting media.mount...
Sep 9 00:36:35.465785 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 9 00:36:35.465797 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 9 00:36:35.465808 systemd[1]: Mounting tmp.mount...
Sep 9 00:36:35.465818 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 9 00:36:35.465828 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:36:35.465839 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:36:35.465849 systemd[1]: Starting modprobe@configfs.service...
Sep 9 00:36:35.465859 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:36:35.465869 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:36:35.465879 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:36:35.465889 systemd[1]: Starting modprobe@fuse.service...
Sep 9 00:36:35.465900 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:36:35.465911 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:36:35.465922 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 9 00:36:35.465932 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Sep 9 00:36:35.465942 systemd[1]: Starting systemd-journald.service...
Sep 9 00:36:35.465959 kernel: loop: module loaded
Sep 9 00:36:35.465969 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:36:35.465979 systemd[1]: Starting systemd-network-generator.service...
Sep 9 00:36:35.465990 systemd[1]: Starting systemd-remount-fs.service...
Sep 9 00:36:35.466001 kernel: fuse: init (API version 7.34)
Sep 9 00:36:35.466011 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:36:35.466022 systemd[1]: Mounted dev-hugepages.mount.
Sep 9 00:36:35.466032 systemd[1]: Mounted dev-mqueue.mount.
Sep 9 00:36:35.466042 systemd[1]: Mounted media.mount.
Sep 9 00:36:35.466052 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 9 00:36:35.466062 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 9 00:36:35.466074 systemd-journald[1034]: Journal started
Sep 9 00:36:35.466116 systemd-journald[1034]: Runtime Journal (/run/log/journal/f56d086531a8474cbefcb0c0ddead4e8) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:36:35.398000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:36:35.398000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Sep 9 00:36:35.462000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 9 00:36:35.462000 audit[1034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff9e55370 a2=4000 a3=1 items=0 ppid=1 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:36:35.462000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 9 00:36:35.469757 systemd[1]: Started systemd-journald.service.
Sep 9 00:36:35.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.468522 systemd[1]: Mounted tmp.mount.
Sep 9 00:36:35.469395 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:36:35.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.470403 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:36:35.470551 systemd[1]: Finished modprobe@configfs.service.
Sep 9 00:36:35.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.471565 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:36:35.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.471736 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:36:35.472604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:36:35.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.473066 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:36:35.473877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:36:35.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.474247 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:36:35.475131 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:36:35.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.475445 systemd[1]: Finished modprobe@fuse.service.
Sep 9 00:36:35.476454 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:36:35.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.476813 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:36:35.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.477986 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:36:35.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.479256 systemd[1]: Finished systemd-network-generator.service.
Sep 9 00:36:35.480432 systemd[1]: Finished systemd-remount-fs.service.
Sep 9 00:36:35.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.481415 systemd[1]: Reached target network-pre.target.
Sep 9 00:36:35.483286 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 9 00:36:35.484866 systemd[1]: Mounting sys-kernel-config.mount...
Sep 9 00:36:35.485500 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:36:35.487429 systemd[1]: Starting systemd-hwdb-update.service...
Sep 9 00:36:35.489267 systemd[1]: Starting systemd-journal-flush.service...
Sep 9 00:36:35.489922 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:36:35.490998 systemd[1]: Starting systemd-random-seed.service...
Sep 9 00:36:35.491755 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:36:35.492921 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:36:35.495454 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 9 00:36:35.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.498415 systemd-journald[1034]: Time spent on flushing to /var/log/journal/f56d086531a8474cbefcb0c0ddead4e8 is 11.509ms for 940 entries.
Sep 9 00:36:35.498415 systemd-journald[1034]: System Journal (/var/log/journal/f56d086531a8474cbefcb0c0ddead4e8) is 8.0M, max 195.6M, 187.6M free.
Sep 9 00:36:35.515939 systemd-journald[1034]: Received client request to flush runtime journal.
Sep 9 00:36:35.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.498427 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 9 00:36:35.500652 systemd[1]: Mounted sys-kernel-config.mount.
Sep 9 00:36:35.502421 systemd[1]: Finished systemd-random-seed.service.
Sep 9 00:36:35.503539 systemd[1]: Reached target first-boot-complete.target.
Sep 9 00:36:35.505463 systemd[1]: Starting systemd-sysusers.service...
Sep 9 00:36:35.511245 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:36:35.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.516880 systemd[1]: Finished systemd-journal-flush.service.
Sep 9 00:36:35.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.525837 systemd[1]: Finished systemd-udev-trigger.service.
Sep 9 00:36:35.528059 systemd[1]: Starting systemd-udev-settle.service...
Sep 9 00:36:35.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.531410 systemd[1]: Finished systemd-sysusers.service.
Sep 9 00:36:35.533356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:36:35.536162 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 9 00:36:35.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.553777 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:36:35.861757 systemd[1]: Finished systemd-hwdb-update.service.
Sep 9 00:36:35.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.863696 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:36:35.879309 systemd-udevd[1088]: Using default interface naming scheme 'v252'.
Sep 9 00:36:35.890808 systemd[1]: Started systemd-udevd.service.
Sep 9 00:36:35.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.892894 systemd[1]: Starting systemd-networkd.service...
Sep 9 00:36:35.899310 systemd[1]: Starting systemd-userdbd.service...
Sep 9 00:36:35.925237 systemd[1]: Found device dev-ttyAMA0.device.
Sep 9 00:36:35.939810 systemd[1]: Started systemd-userdbd.service.
Sep 9 00:36:35.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.960324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 9 00:36:35.987472 systemd-networkd[1095]: lo: Link UP
Sep 9 00:36:35.987488 systemd-networkd[1095]: lo: Gained carrier
Sep 9 00:36:35.987928 systemd-networkd[1095]: Enumeration completed
Sep 9 00:36:35.988185 systemd[1]: Started systemd-networkd.service.
Sep 9 00:36:35.988190 systemd-networkd[1095]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:36:35.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.989881 systemd-networkd[1095]: eth0: Link UP
Sep 9 00:36:35.989894 systemd-networkd[1095]: eth0: Gained carrier
Sep 9 00:36:35.997373 systemd[1]: Finished systemd-udev-settle.service.
Sep 9 00:36:35.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:35.999334 systemd[1]: Starting lvm2-activation-early.service...
Sep 9 00:36:36.010071 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:36:36.011118 systemd-networkd[1095]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:36:36.041857 systemd[1]: Finished lvm2-activation-early.service.
Sep 9 00:36:36.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.042767 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:36:36.044614 systemd[1]: Starting lvm2-activation.service...
Sep 9 00:36:36.048381 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:36:36.069885 systemd[1]: Finished lvm2-activation.service.
Sep 9 00:36:36.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.070712 systemd[1]: Reached target local-fs-pre.target.
Sep 9 00:36:36.071431 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:36:36.071458 systemd[1]: Reached target local-fs.target.
Sep 9 00:36:36.072044 systemd[1]: Reached target machines.target.
Sep 9 00:36:36.073835 systemd[1]: Starting ldconfig.service...
Sep 9 00:36:36.074886 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.074941 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.076320 systemd[1]: Starting systemd-boot-update.service...
Sep 9 00:36:36.078128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 9 00:36:36.080123 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 9 00:36:36.082107 systemd[1]: Starting systemd-sysext.service...
Sep 9 00:36:36.083256 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl)
Sep 9 00:36:36.084527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 9 00:36:36.088188 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 9 00:36:36.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.092777 systemd[1]: Unmounting usr-share-oem.mount...
Sep 9 00:36:36.097137 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 9 00:36:36.097383 systemd[1]: Unmounted usr-share-oem.mount.
Sep 9 00:36:36.157110 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 9 00:36:36.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.159979 kernel: loop0: detected capacity change from 0 to 203944
Sep 9 00:36:36.175074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:36:36.182579 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31)
Sep 9 00:36:36.182579 systemd-fsck[1139]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 9 00:36:36.184631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 9 00:36:36.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.192974 kernel: loop1: detected capacity change from 0 to 203944
Sep 9 00:36:36.203305 (sd-sysext)[1145]: Using extensions 'kubernetes'.
Sep 9 00:36:36.204097 (sd-sysext)[1145]: Merged extensions into '/usr'.
Sep 9 00:36:36.223653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.224990 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:36:36.226850 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:36:36.228734 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:36:36.229650 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.229799 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.230536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:36:36.230702 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:36:36.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.232045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:36:36.232183 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:36:36.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.233392 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:36:36.235269 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:36:36.235413 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:36:36.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.236479 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.294137 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:36:36.297170 systemd[1]: Finished ldconfig.service.
Sep 9 00:36:36.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.460058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:36:36.461844 systemd[1]: Mounting boot.mount...
Sep 9 00:36:36.463763 systemd[1]: Mounting usr-share-oem.mount...
Sep 9 00:36:36.470091 systemd[1]: Mounted boot.mount.
Sep 9 00:36:36.471055 systemd[1]: Mounted usr-share-oem.mount.
Sep 9 00:36:36.473005 systemd[1]: Finished systemd-sysext.service.
Sep 9 00:36:36.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.475039 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:36:36.476878 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 9 00:36:36.480201 systemd[1]: Finished systemd-boot-update.service.
Sep 9 00:36:36.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.482649 systemd[1]: Reloading.
Sep 9 00:36:36.485772 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 9 00:36:36.486517 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:36:36.487828 systemd-tmpfiles[1162]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:36:36.520298 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-09-09T00:36:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:36:36.520330 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-09-09T00:36:36Z" level=info msg="torcx already run"
Sep 9 00:36:36.582555 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:36:36.582578 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:36:36.597780 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:36:36.642550 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 9 00:36:36.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.646528 systemd[1]: Starting audit-rules.service...
Sep 9 00:36:36.648379 systemd[1]: Starting clean-ca-certificates.service...
Sep 9 00:36:36.650516 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 9 00:36:36.652937 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:36:36.655073 systemd[1]: Starting systemd-timesyncd.service...
Sep 9 00:36:36.657886 systemd[1]: Starting systemd-update-utmp.service...
Sep 9 00:36:36.659861 systemd[1]: Finished clean-ca-certificates.service.
Sep 9 00:36:36.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.664000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.669238 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.670532 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:36:36.672290 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:36:36.674037 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:36:36.674635 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.674771 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.674882 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:36:36.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.675827 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 9 00:36:36.677157 systemd[1]: Finished systemd-update-utmp.service.
Sep 9 00:36:36.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.678292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:36:36.678428 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:36:36.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.681656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:36:36.681818 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:36:36.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.683046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:36:36.683197 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:36:36.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.685832 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.687171 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:36:36.689668 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:36:36.691540 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:36:36.692208 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.692352 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.693672 systemd[1]: Starting systemd-update-done.service...
Sep 9 00:36:36.694502 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:36:36.695439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:36:36.695587 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:36:36.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.697138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:36:36.697276 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:36:36.698392 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:36:36.698549 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:36:36.699567 systemd[1]: Finished systemd-update-done.service.
Sep 9 00:36:36.702527 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.703714 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:36:36.705596 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:36:36.707479 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:36:36.709267 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:36:36.709931 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.710081 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.711463 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 9 00:36:36.712427 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:36:36.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.713511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:36:36.713665 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:36:36.714803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:36:36.714938 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:36:36.716142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:36:36.716278 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:36:36.722640 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:36:36.723000 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:36:36.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:36:36.728776 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:36:36.728879 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.729000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 9 00:36:36.729000 audit[1277]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff8191d90 a2=420 a3=0 items=0 ppid=1229 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:36:36.729000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 9 00:36:36.731009 augenrules[1277]: No rules
Sep 9 00:36:36.731458 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:36:36.732466 systemd[1]: Finished audit-rules.service.
Sep 9 00:36:36.739570 systemd[1]: Started systemd-timesyncd.service.
Sep 9 00:36:36.740308 systemd-timesyncd[1235]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:36:36.740362 systemd-timesyncd[1235]: Initial clock synchronization to Tue 2025-09-09 00:36:36.393939 UTC.
Sep 9 00:36:36.740711 systemd[1]: Reached target time-set.target.
Sep 9 00:36:36.741267 systemd-resolved[1234]: Positive Trust Anchors:
Sep 9 00:36:36.741277 systemd-resolved[1234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:36:36.741303 systemd-resolved[1234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:36:36.749449 systemd-resolved[1234]: Defaulting to hostname 'linux'.
Sep 9 00:36:36.750880 systemd[1]: Started systemd-resolved.service.
Sep 9 00:36:36.751636 systemd[1]: Reached target network.target.
Sep 9 00:36:36.752249 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:36:36.752842 systemd[1]: Reached target sysinit.target.
Sep 9 00:36:36.753531 systemd[1]: Started motdgen.path.
Sep 9 00:36:36.754177 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 9 00:36:36.755182 systemd[1]: Started logrotate.timer.
Sep 9 00:36:36.755838 systemd[1]: Started mdadm.timer.
Sep 9 00:36:36.756415 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 9 00:36:36.757055 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:36:36.757077 systemd[1]: Reached target paths.target.
Sep 9 00:36:36.757609 systemd[1]: Reached target timers.target.
Sep 9 00:36:36.758530 systemd[1]: Listening on dbus.socket.
Sep 9 00:36:36.760305 systemd[1]: Starting docker.socket...
Sep 9 00:36:36.761939 systemd[1]: Listening on sshd.socket.
Sep 9 00:36:36.762725 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.763139 systemd[1]: Listening on docker.socket.
Sep 9 00:36:36.763823 systemd[1]: Reached target sockets.target.
Sep 9 00:36:36.764666 systemd[1]: Reached target basic.target.
Sep 9 00:36:36.765493 systemd[1]: System is tainted: cgroupsv1
Sep 9 00:36:36.765532 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.765551 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:36:36.766572 systemd[1]: Starting containerd.service...
Sep 9 00:36:36.768313 systemd[1]: Starting dbus.service...
Sep 9 00:36:36.769866 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 9 00:36:36.771842 systemd[1]: Starting extend-filesystems.service...
Sep 9 00:36:36.772734 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 9 00:36:36.773860 systemd[1]: Starting motdgen.service...
Sep 9 00:36:36.775626 systemd[1]: Starting prepare-helm.service...
Sep 9 00:36:36.777474 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 9 00:36:36.779706 jq[1292]: false
Sep 9 00:36:36.779266 systemd[1]: Starting sshd-keygen.service...
Sep 9 00:36:36.781627 systemd[1]: Starting systemd-logind.service...
Sep 9 00:36:36.782260 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:36:36.782334 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:36:36.783496 systemd[1]: Starting update-engine.service...
Sep 9 00:36:36.785138 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 9 00:36:36.787376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:36:36.789462 jq[1307]: true
Sep 9 00:36:36.789282 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 9 00:36:36.795456 jq[1315]: true
Sep 9 00:36:36.801438 tar[1314]: linux-arm64/helm
Sep 9 00:36:36.803460 extend-filesystems[1293]: Found loop1
Sep 9 00:36:36.803749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:36:36.804053 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 9 00:36:36.807028 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:36:36.807240 systemd[1]: Finished motdgen.service.
Sep 9 00:36:36.811879 dbus-daemon[1291]: [system] SELinux support is enabled
Sep 9 00:36:36.812061 systemd[1]: Started dbus.service.
Sep 9 00:36:36.814376 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:36:36.814398 systemd[1]: Reached target system-config.target.
Sep 9 00:36:36.815108 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:36:36.815129 systemd[1]: Reached target user-config.target.
Sep 9 00:36:36.815450 extend-filesystems[1293]: Found vda
Sep 9 00:36:36.820158 extend-filesystems[1293]: Found vda1
Sep 9 00:36:36.821163 extend-filesystems[1293]: Found vda2
Sep 9 00:36:36.822049 extend-filesystems[1293]: Found vda3
Sep 9 00:36:36.822049 extend-filesystems[1293]: Found usr
Sep 9 00:36:36.823707 extend-filesystems[1293]: Found vda4
Sep 9 00:36:36.823707 extend-filesystems[1293]: Found vda6
Sep 9 00:36:36.823707 extend-filesystems[1293]: Found vda7
Sep 9 00:36:36.823707 extend-filesystems[1293]: Found vda9
Sep 9 00:36:36.823707 extend-filesystems[1293]: Checking size of /dev/vda9
Sep 9 00:36:36.839977 extend-filesystems[1293]: Resized partition /dev/vda9
Sep 9 00:36:36.843840 extend-filesystems[1348]: resize2fs 1.46.5 (30-Dec-2021)
Sep 9 00:36:36.851601 update_engine[1306]: I0909 00:36:36.850579 1306 main.cc:92] Flatcar Update Engine starting
Sep 9 00:36:36.857580 systemd-logind[1303]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 00:36:36.858340 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:36:36.858372 systemd[1]: Started update-engine.service.
Sep 9 00:36:36.858441 update_engine[1306]: I0909 00:36:36.858397 1306 update_check_scheduler.cc:74] Next update check in 10m19s
Sep 9 00:36:36.860281 systemd-logind[1303]: New seat seat0.
Sep 9 00:36:36.863125 systemd[1]: Started locksmithd.service.
Sep 9 00:36:36.864166 systemd[1]: Started systemd-logind.service.
Sep 9 00:36:36.887555 bash[1342]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:36:36.886361 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 9 00:36:36.888967 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:36:36.899875 extend-filesystems[1348]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:36:36.899875 extend-filesystems[1348]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:36:36.899875 extend-filesystems[1348]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:36:36.908500 extend-filesystems[1293]: Resized filesystem in /dev/vda9
Sep 9 00:36:36.904741 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:36:36.909467 env[1317]: time="2025-09-09T00:36:36.900498720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 9 00:36:36.905002 systemd[1]: Finished extend-filesystems.service.
Sep 9 00:36:36.914101 locksmithd[1350]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:36:36.921154 env[1317]: time="2025-09-09T00:36:36.921109880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:36:36.921278 env[1317]: time="2025-09-09T00:36:36.921252880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922427 env[1317]: time="2025-09-09T00:36:36.922381680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922427 env[1317]: time="2025-09-09T00:36:36.922417560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922687 env[1317]: time="2025-09-09T00:36:36.922659600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922687 env[1317]: time="2025-09-09T00:36:36.922682120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922773 env[1317]: time="2025-09-09T00:36:36.922696520Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 9 00:36:36.922773 env[1317]: time="2025-09-09T00:36:36.922706000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.922815 env[1317]: time="2025-09-09T00:36:36.922793760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.923076 env[1317]: time="2025-09-09T00:36:36.923047720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:36:36.923230 env[1317]: time="2025-09-09T00:36:36.923207960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:36:36.923263 env[1317]: time="2025-09-09T00:36:36.923228600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:36:36.923306 env[1317]: time="2025-09-09T00:36:36.923286320Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 9 00:36:36.923306 env[1317]: time="2025-09-09T00:36:36.923303920Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:36:36.926677 env[1317]: time="2025-09-09T00:36:36.926644960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:36:36.926677 env[1317]: time="2025-09-09T00:36:36.926677040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:36:36.926785 env[1317]: time="2025-09-09T00:36:36.926690360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:36:36.926785 env[1317]: time="2025-09-09T00:36:36.926733280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.926785 env[1317]: time="2025-09-09T00:36:36.926749720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.926785 env[1317]: time="2025-09-09T00:36:36.926763120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.926860 env[1317]: time="2025-09-09T00:36:36.926775240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927184 env[1317]: time="2025-09-09T00:36:36.927158800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927231 env[1317]: time="2025-09-09T00:36:36.927185600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927231 env[1317]: time="2025-09-09T00:36:36.927200680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927231 env[1317]: time="2025-09-09T00:36:36.927213560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927231 env[1317]: time="2025-09-09T00:36:36.927225680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:36:36.927361 env[1317]: time="2025-09-09T00:36:36.927337760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:36:36.927438 env[1317]: time="2025-09-09T00:36:36.927416880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:36:36.927804 env[1317]: time="2025-09-09T00:36:36.927778920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:36:36.927845 env[1317]: time="2025-09-09T00:36:36.927814720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.927845 env[1317]: time="2025-09-09T00:36:36.927829320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:36:36.927956 env[1317]: time="2025-09-09T00:36:36.927935000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.927989 env[1317]: time="2025-09-09T00:36:36.927962120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.927989 env[1317]: time="2025-09-09T00:36:36.927975360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.927989 env[1317]: time="2025-09-09T00:36:36.927985800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928053 env[1317]: time="2025-09-09T00:36:36.927998640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928053 env[1317]: time="2025-09-09T00:36:36.928012520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928053 env[1317]: time="2025-09-09T00:36:36.928023240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928053 env[1317]: time="2025-09-09T00:36:36.928035280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928053 env[1317]: time="2025-09-09T00:36:36.928048800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:36:36.928184 env[1317]: time="2025-09-09T00:36:36.928168560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928206 env[1317]: time="2025-09-09T00:36:36.928184280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928206 env[1317]: time="2025-09-09T00:36:36.928196960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928244 env[1317]: time="2025-09-09T00:36:36.928208600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 00:36:36.928244 env[1317]: time="2025-09-09T00:36:36.928222360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 9 00:36:36.928244 env[1317]: time="2025-09-09T00:36:36.928233160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 00:36:36.928302 env[1317]: time="2025-09-09T00:36:36.928249600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 9 00:36:36.928302 env[1317]: time="2025-09-09T00:36:36.928281200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 00:36:36.928518 env[1317]: time="2025-09-09T00:36:36.928462920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 00:36:36.929114 env[1317]: time="2025-09-09T00:36:36.928526080Z" level=info msg="Connect containerd service"
Sep 9 00:36:36.929114 env[1317]: time="2025-09-09T00:36:36.928564760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 00:36:36.929230 env[1317]: time="2025-09-09T00:36:36.929193080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:36:36.929892 env[1317]: time="2025-09-09T00:36:36.929851640Z" level=info msg="Start subscribing containerd event"
Sep 9 00:36:36.929959 env[1317]: time="2025-09-09T00:36:36.929899080Z" level=info msg="Start recovering state"
Sep 9 00:36:36.929987 env[1317]: time="2025-09-09T00:36:36.929972360Z" level=info msg="Start event monitor"
Sep 9 00:36:36.930008 env[1317]: time="2025-09-09T00:36:36.929995560Z" level=info msg="Start snapshots syncer"
Sep 9 00:36:36.930008 env[1317]: time="2025-09-09T00:36:36.930005600Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:36:36.930058 env[1317]: time="2025-09-09T00:36:36.930015720Z" level=info msg="Start streaming server"
Sep 9 00:36:36.930225 env[1317]: time="2025-09-09T00:36:36.930199320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:36:36.930263 env[1317]: time="2025-09-09T00:36:36.930252680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:36:36.930426 systemd[1]: Started containerd.service.
Sep 9 00:36:36.931641 env[1317]: time="2025-09-09T00:36:36.931606280Z" level=info msg="containerd successfully booted in 0.035008s"
Sep 9 00:36:37.178448 tar[1314]: linux-arm64/LICENSE
Sep 9 00:36:37.178652 tar[1314]: linux-arm64/README.md
Sep 9 00:36:37.183161 systemd[1]: Finished prepare-helm.service.
Sep 9 00:36:37.346180 systemd-networkd[1095]: eth0: Gained IPv6LL
Sep 9 00:36:37.347835 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 9 00:36:37.348905 systemd[1]: Reached target network-online.target.
Sep 9 00:36:37.351182 systemd[1]: Starting kubelet.service...
Sep 9 00:36:37.917203 systemd[1]: Started kubelet.service.
Sep 9 00:36:38.168721 sshd_keygen[1325]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:36:38.186330 systemd[1]: Finished sshd-keygen.service.
Sep 9 00:36:38.188457 systemd[1]: Starting issuegen.service...
Sep 9 00:36:38.193158 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:36:38.193382 systemd[1]: Finished issuegen.service.
Sep 9 00:36:38.195578 systemd[1]: Starting systemd-user-sessions.service...
Sep 9 00:36:38.201797 systemd[1]: Finished systemd-user-sessions.service.
Sep 9 00:36:38.203874 systemd[1]: Started getty@tty1.service.
Sep 9 00:36:38.205725 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 9 00:36:38.206742 systemd[1]: Reached target getty.target.
Sep 9 00:36:38.207489 systemd[1]: Reached target multi-user.target.
Sep 9 00:36:38.209353 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 9 00:36:38.215600 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 9 00:36:38.215795 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 9 00:36:38.217098 systemd[1]: Startup finished in 5.470s (kernel) + 4.775s (userspace) = 10.246s.
Sep 9 00:36:38.302498 kubelet[1376]: E0909 00:36:38.302435 1376 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:36:38.304351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:36:38.304485 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:36:41.320604 systemd[1]: Created slice system-sshd.slice.
Sep 9 00:36:41.321736 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:60368.service.
Sep 9 00:36:41.361709 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 60368 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:41.363916 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.371500 systemd[1]: Created slice user-500.slice.
Sep 9 00:36:41.372458 systemd[1]: Starting user-runtime-dir@500.service...
Sep 9 00:36:41.374626 systemd-logind[1303]: New session 1 of user core.
Sep 9 00:36:41.380859 systemd[1]: Finished user-runtime-dir@500.service.
Sep 9 00:36:41.382076 systemd[1]: Starting user@500.service...
Sep 9 00:36:41.384981 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.444692 systemd[1407]: Queued start job for default target default.target.
Sep 9 00:36:41.444936 systemd[1407]: Reached target paths.target.
Sep 9 00:36:41.444962 systemd[1407]: Reached target sockets.target.
Sep 9 00:36:41.444973 systemd[1407]: Reached target timers.target.
Sep 9 00:36:41.444982 systemd[1407]: Reached target basic.target.
Sep 9 00:36:41.445026 systemd[1407]: Reached target default.target.
Sep 9 00:36:41.445048 systemd[1407]: Startup finished in 54ms.
Sep 9 00:36:41.445272 systemd[1]: Started user@500.service.
Sep 9 00:36:41.446224 systemd[1]: Started session-1.scope.
Sep 9 00:36:41.496720 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:60378.service.
Sep 9 00:36:41.534238 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 60378 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:41.535435 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.539790 systemd[1]: Started session-2.scope.
Sep 9 00:36:41.540163 systemd-logind[1303]: New session 2 of user core.
Sep 9 00:36:41.599836 sshd[1416]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:41.602337 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:60388.service.
Sep 9 00:36:41.603585 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:60378.service: Deactivated successfully.
Sep 9 00:36:41.604278 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:36:41.604571 systemd-logind[1303]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:36:41.605160 systemd-logind[1303]: Removed session 2.
Sep 9 00:36:41.638002 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 60388 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:41.639111 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.642989 systemd[1]: Started session-3.scope.
Sep 9 00:36:41.643181 systemd-logind[1303]: New session 3 of user core.
Sep 9 00:36:41.692362 sshd[1421]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:41.694587 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:60400.service.
Sep 9 00:36:41.695171 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:60388.service: Deactivated successfully.
Sep 9 00:36:41.695820 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:36:41.696517 systemd-logind[1303]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:36:41.697259 systemd-logind[1303]: Removed session 3.
Sep 9 00:36:41.730442 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 60400 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:41.731810 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.735530 systemd[1]: Started session-4.scope.
Sep 9 00:36:41.735713 systemd-logind[1303]: New session 4 of user core.
Sep 9 00:36:41.787033 sshd[1428]: pam_unix(sshd:session): session closed for user core
Sep 9 00:36:41.789574 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:60408.service.
Sep 9 00:36:41.790029 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:60400.service: Deactivated successfully.
Sep 9 00:36:41.790906 systemd-logind[1303]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:36:41.790971 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:36:41.791833 systemd-logind[1303]: Removed session 4.
Sep 9 00:36:41.826534 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 60408 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:36:41.827658 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:36:41.830670 systemd-logind[1303]: New session 5 of user core.
Sep 9 00:36:41.831456 systemd[1]: Started session-5.scope.
Sep 9 00:36:41.886036 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:36:41.886242 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 9 00:36:41.924063 systemd[1]: Starting docker.service...
Sep 9 00:36:41.980515 env[1453]: time="2025-09-09T00:36:41.980461429Z" level=info msg="Starting up"
Sep 9 00:36:41.982612 env[1453]: time="2025-09-09T00:36:41.982581630Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 9 00:36:41.982705 env[1453]: time="2025-09-09T00:36:41.982691451Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 9 00:36:41.982778 env[1453]: time="2025-09-09T00:36:41.982758505Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 9 00:36:41.982830 env[1453]: time="2025-09-09T00:36:41.982817489Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 9 00:36:41.989153 env[1453]: time="2025-09-09T00:36:41.989126555Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 9 00:36:41.989153 env[1453]: time="2025-09-09T00:36:41.989150414Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 9 00:36:41.989252 env[1453]: time="2025-09-09T00:36:41.989166397Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 9 00:36:41.989252 env[1453]: time="2025-09-09T00:36:41.989177703Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 9 00:36:41.993967 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4108283276-merged.mount: Deactivated successfully.
Sep 9 00:36:42.183544 env[1453]: time="2025-09-09T00:36:42.183449045Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 9 00:36:42.183544 env[1453]: time="2025-09-09T00:36:42.183475759Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 9 00:36:42.183739 env[1453]: time="2025-09-09T00:36:42.183649103Z" level=info msg="Loading containers: start."
Sep 9 00:36:42.297967 kernel: Initializing XFRM netlink socket
Sep 9 00:36:42.321057 env[1453]: time="2025-09-09T00:36:42.321013014Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 9 00:36:42.380323 systemd-networkd[1095]: docker0: Link UP
Sep 9 00:36:42.404424 env[1453]: time="2025-09-09T00:36:42.404378519Z" level=info msg="Loading containers: done."
Sep 9 00:36:42.419405 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2659690908-merged.mount: Deactivated successfully.
Sep 9 00:36:42.421389 env[1453]: time="2025-09-09T00:36:42.421352734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:36:42.421542 env[1453]: time="2025-09-09T00:36:42.421526196Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 9 00:36:42.421638 env[1453]: time="2025-09-09T00:36:42.421622841Z" level=info msg="Daemon has completed initialization"
Sep 9 00:36:42.436197 systemd[1]: Started docker.service.
Sep 9 00:36:42.444211 env[1453]: time="2025-09-09T00:36:42.444159865Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:36:42.995751 env[1317]: time="2025-09-09T00:36:42.995680230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 00:36:43.601629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861144976.mount: Deactivated successfully.
Sep 9 00:36:44.879162 env[1317]: time="2025-09-09T00:36:44.879118185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:44.880803 env[1317]: time="2025-09-09T00:36:44.880756573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:44.883075 env[1317]: time="2025-09-09T00:36:44.883049915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:44.885378 env[1317]: time="2025-09-09T00:36:44.885336376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:44.886101 env[1317]: time="2025-09-09T00:36:44.886076852Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 9 00:36:44.887347 env[1317]: time="2025-09-09T00:36:44.887323572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 00:36:46.181609 env[1317]: time="2025-09-09T00:36:46.181555010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:46.184597 env[1317]: time="2025-09-09T00:36:46.184557775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:46.195426 env[1317]: time="2025-09-09T00:36:46.195386071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:46.198353 env[1317]: time="2025-09-09T00:36:46.198326261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:46.199084 env[1317]: time="2025-09-09T00:36:46.199050708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 9 00:36:46.200375 env[1317]: time="2025-09-09T00:36:46.200335447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 00:36:47.424469 env[1317]: time="2025-09-09T00:36:47.424413453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:47.426006 env[1317]: time="2025-09-09T00:36:47.425973404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:47.427652 env[1317]: time="2025-09-09T00:36:47.427603744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:47.430217 env[1317]: time="2025-09-09T00:36:47.430179355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:47.431017 env[1317]: time="2025-09-09T00:36:47.430968782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 9 00:36:47.431486 env[1317]: time="2025-09-09T00:36:47.431461744Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 00:36:48.514440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621489273.mount: Deactivated successfully.
Sep 9 00:36:48.515709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:36:48.515825 systemd[1]: Stopped kubelet.service.
Sep 9 00:36:48.518034 systemd[1]: Starting kubelet.service...
Sep 9 00:36:48.639051 systemd[1]: Started kubelet.service.
Sep 9 00:36:48.686990 kubelet[1591]: E0909 00:36:48.686929 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:36:48.689569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:36:48.689711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:36:49.172818 env[1317]: time="2025-09-09T00:36:49.172723679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:49.176138 env[1317]: time="2025-09-09T00:36:49.176086105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:49.178937 env[1317]: time="2025-09-09T00:36:49.178892343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:49.182122 env[1317]: time="2025-09-09T00:36:49.182007900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:49.182418 env[1317]: time="2025-09-09T00:36:49.182223881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 9 00:36:49.183957 env[1317]: time="2025-09-09T00:36:49.183902120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:36:49.715882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573320093.mount: Deactivated successfully.
Sep 9 00:36:50.748354 env[1317]: time="2025-09-09T00:36:50.748297869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:50.749932 env[1317]: time="2025-09-09T00:36:50.749904121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:50.754337 env[1317]: time="2025-09-09T00:36:50.754307073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:50.755222 env[1317]: time="2025-09-09T00:36:50.755184805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 00:36:50.756394 env[1317]: time="2025-09-09T00:36:50.756360249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:36:50.756507 env[1317]: time="2025-09-09T00:36:50.756475126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:51.215830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001947657.mount: Deactivated successfully.
Sep 9 00:36:51.221532 env[1317]: time="2025-09-09T00:36:51.221448002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:51.224383 env[1317]: time="2025-09-09T00:36:51.224344934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:51.226046 env[1317]: time="2025-09-09T00:36:51.226000012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:51.227902 env[1317]: time="2025-09-09T00:36:51.227843027Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:51.228428 env[1317]: time="2025-09-09T00:36:51.228385581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 00:36:51.228940 env[1317]: time="2025-09-09T00:36:51.228915579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 00:36:51.842213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824940219.mount: Deactivated successfully.
Sep 9 00:36:53.863509 env[1317]: time="2025-09-09T00:36:53.863449498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:53.870054 env[1317]: time="2025-09-09T00:36:53.870003828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:53.872235 env[1317]: time="2025-09-09T00:36:53.872198275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:53.874097 env[1317]: time="2025-09-09T00:36:53.874068826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:36:53.875270 env[1317]: time="2025-09-09T00:36:53.875238696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 9 00:36:58.837466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:36:58.837639 systemd[1]: Stopped kubelet.service.
Sep 9 00:36:58.839102 systemd[1]: Starting kubelet.service...
Sep 9 00:36:58.964483 systemd[1]: Started kubelet.service.
Sep 9 00:36:59.018918 kubelet[1628]: E0909 00:36:59.018864 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:36:59.021306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:36:59.021477 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:36:59.036642 systemd[1]: Stopped kubelet.service.
Sep 9 00:36:59.038904 systemd[1]: Starting kubelet.service...
Sep 9 00:36:59.068126 systemd[1]: Reloading.
Sep 9 00:36:59.133047 /usr/lib/systemd/system-generators/torcx-generator[1665]: time="2025-09-09T00:36:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:36:59.133068 /usr/lib/systemd/system-generators/torcx-generator[1665]: time="2025-09-09T00:36:59Z" level=info msg="torcx already run"
Sep 9 00:36:59.499297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:36:59.499318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:36:59.515693 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:36:59.583004 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 00:36:59.583074 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 00:36:59.583399 systemd[1]: Stopped kubelet.service.
Sep 9 00:36:59.585288 systemd[1]: Starting kubelet.service...
Sep 9 00:36:59.681095 systemd[1]: Started kubelet.service.
Sep 9 00:36:59.720654 kubelet[1722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:36:59.720654 kubelet[1722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:36:59.720654 kubelet[1722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:36:59.722063 kubelet[1722]: I0909 00:36:59.721989 1722 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:37:00.551744 kubelet[1722]: I0909 00:37:00.551706 1722 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 00:37:00.551894 kubelet[1722]: I0909 00:37:00.551883 1722 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:37:00.552235 kubelet[1722]: I0909 00:37:00.552217 1722 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 00:37:00.583177 kubelet[1722]: E0909 00:37:00.583134 1722 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:00.584511 kubelet[1722]: I0909 00:37:00.584472 1722 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:37:00.593101 kubelet[1722]: E0909 00:37:00.593051 1722 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:37:00.593101 kubelet[1722]: I0909 00:37:00.593091 1722 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:37:00.597065 kubelet[1722]: I0909 00:37:00.597028 1722 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:37:00.597583 kubelet[1722]: I0909 00:37:00.597553 1722 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 00:37:00.597697 kubelet[1722]: I0909 00:37:00.597662 1722 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:37:00.597874 kubelet[1722]: I0909 00:37:00.597690 1722 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 9 00:37:00.598017 kubelet[1722]: I0909 00:37:00.598006 1722 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:37:00.598017 kubelet[1722]: I0909 00:37:00.598019 1722 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 00:37:00.598204 kubelet[1722]: I0909 00:37:00.598181 1722 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:37:00.601872 kubelet[1722]: W0909 00:37:00.601799 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:00.601964 kubelet[1722]: E0909 00:37:00.601878 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:00.602160 kubelet[1722]: I0909 00:37:00.602141 1722 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 00:37:00.602195 kubelet[1722]: I0909 00:37:00.602173 1722 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:37:00.602227 kubelet[1722]: I0909 00:37:00.602200 1722 kubelet.go:314] "Adding apiserver pod source"
Sep 9 00:37:00.602306 kubelet[1722]: I0909 00:37:00.602279 1722 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:37:00.602871 kubelet[1722]: W0909 00:37:00.602833 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:00.603000 kubelet[1722]: E0909 00:37:00.602978 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:00.608701 kubelet[1722]: I0909 00:37:00.608661 1722 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 9 00:37:00.609433 kubelet[1722]: I0909 00:37:00.609403 1722 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:37:00.609609 kubelet[1722]: W0909 00:37:00.609598 1722 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:37:00.610566 kubelet[1722]: I0909 00:37:00.610542 1722 server.go:1274] "Started kubelet"
Sep 9 00:37:00.610920 kubelet[1722]: I0909 00:37:00.610866 1722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:37:00.611302 kubelet[1722]: I0909 00:37:00.611285 1722 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:37:00.611429 kubelet[1722]: I0909 00:37:00.611409 1722 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:37:00.612501 kubelet[1722]: I0909 00:37:00.612481 1722 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 00:37:00.612599 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 9 00:37:00.612718 kubelet[1722]: I0909 00:37:00.612689 1722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:37:00.615306 kubelet[1722]: I0909 00:37:00.615282 1722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:37:00.620290 kubelet[1722]: I0909 00:37:00.620181 1722 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 00:37:00.620492 kubelet[1722]: E0909 00:37:00.620463 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:37:00.620541 kubelet[1722]: I0909 00:37:00.620513 1722 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 00:37:00.620656 kubelet[1722]: I0909 00:37:00.620638 1722 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:37:00.621382 kubelet[1722]: W0909 00:37:00.621290 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:00.621382 kubelet[1722]: E0909 00:37:00.621345 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:00.622769 kubelet[1722]: E0909 00:37:00.621740 1722 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863762e3f7f6763 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:37:00.610512739 +0000 UTC m=+0.926097897,LastTimestamp:2025-09-09 00:37:00.610512739 +0000 UTC m=+0.926097897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:37:00.624286 kubelet[1722]: I0909 00:37:00.624252 1722 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:37:00.624352 kubelet[1722]: E0909 00:37:00.624293 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Sep 9 00:37:00.624391 kubelet[1722]: I0909 00:37:00.624361 1722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:37:00.624858 kubelet[1722]: E0909 00:37:00.624834 1722 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:37:00.626119 kubelet[1722]: I0909 00:37:00.626088 1722 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:37:00.632409 kubelet[1722]: I0909 00:37:00.632366 1722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:37:00.633419 kubelet[1722]: I0909 00:37:00.633392 1722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:37:00.633419 kubelet[1722]: I0909 00:37:00.633419 1722 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 00:37:00.633495 kubelet[1722]: I0909 00:37:00.633443 1722 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 00:37:00.633495 kubelet[1722]: E0909 00:37:00.633488 1722 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:37:00.639472 kubelet[1722]: W0909 00:37:00.639414 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:00.639570 kubelet[1722]: E0909 00:37:00.639480 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:00.644706 kubelet[1722]: I0909 00:37:00.644682 1722 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 00:37:00.644706 kubelet[1722]: I0909 00:37:00.644711 1722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 00:37:00.644808 kubelet[1722]: I0909 00:37:00.644731 1722 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:37:00.720756 kubelet[1722]: E0909 00:37:00.720722 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:37:00.734030 kubelet[1722]: E0909 00:37:00.733987 1722 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 00:37:00.821354 kubelet[1722]: E0909 00:37:00.821223 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:37:00.826115 kubelet[1722]: E0909 00:37:00.826064 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Sep 9 00:37:00.835094 kubelet[1722]: I0909 00:37:00.835061 1722 policy_none.go:49] "None policy: Start"
Sep 9 00:37:00.836096 kubelet[1722]: I0909 00:37:00.836009 1722 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 00:37:00.836096 kubelet[1722]: I0909 00:37:00.836080 1722 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:37:00.868533 kubelet[1722]: I0909 00:37:00.867012 1722 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:37:00.868533 kubelet[1722]: I0909 00:37:00.867205 1722 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:37:00.868533 kubelet[1722]: I0909 00:37:00.867219 1722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:37:00.868533 kubelet[1722]: I0909 00:37:00.867639 1722 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:37:00.870032 kubelet[1722]: E0909 00:37:00.870003 1722 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:37:00.968874 kubelet[1722]: I0909 00:37:00.968841 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:37:00.969415 kubelet[1722]: E0909 00:37:00.969348 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Sep 9 00:37:01.023027 kubelet[1722]: I0909 00:37:01.022732 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:37:01.023027 kubelet[1722]: I0909 00:37:01.022785 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:37:01.023027 kubelet[1722]: I0909 00:37:01.022806 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:37:01.023027 kubelet[1722]: I0909 00:37:01.022824 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:37:01.023027 kubelet[1722]: I0909 00:37:01.022841 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:37:01.023307 kubelet[1722]: I0909 00:37:01.022856 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:37:01.023307 kubelet[1722]: I0909 00:37:01.022877 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:37:01.023307 kubelet[1722]: I0909 00:37:01.022895 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:37:01.023307 kubelet[1722]: I0909 00:37:01.022909 1722 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:37:01.172780 kubelet[1722]: I0909 00:37:01.172748 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:37:01.173346 kubelet[1722]: E0909 00:37:01.173313 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Sep 9 00:37:01.227504 kubelet[1722]: E0909 00:37:01.227457 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms"
Sep 9 00:37:01.240823 kubelet[1722]: E0909 00:37:01.240790 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:01.241625 env[1317]: time="2025-09-09T00:37:01.241559402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51621733d8e937772b8db5544f072d3f,Namespace:kube-system,Attempt:0,}"
Sep 9 00:37:01.242087 kubelet[1722]: E0909 00:37:01.242066 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:01.242588 kubelet[1722]: E0909 00:37:01.242501 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:01.242664 env[1317]: time="2025-09-09T00:37:01.242496274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 9 00:37:01.242940 env[1317]: time="2025-09-09T00:37:01.242893575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 9 00:37:01.430722 kubelet[1722]: W0909 00:37:01.430596 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:01.431041 kubelet[1722]: E0909 00:37:01.431017 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:01.574938 kubelet[1722]: I0909 00:37:01.574891 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:37:01.575221 kubelet[1722]: E0909 00:37:01.575199 1722 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Sep 9 00:37:01.595025 kubelet[1722]: W0909 00:37:01.594964 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:01.595134 kubelet[1722]: E0909 00:37:01.595027 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:01.708419 kubelet[1722]: W0909 00:37:01.708268 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Sep 9 00:37:01.708419 kubelet[1722]: E0909 00:37:01.708339 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:37:01.957615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374764164.mount: Deactivated successfully.
Sep 9 00:37:01.964404 env[1317]: time="2025-09-09T00:37:01.964306068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.968437 env[1317]: time="2025-09-09T00:37:01.968393357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.969289 env[1317]: time="2025-09-09T00:37:01.969261949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.971248 env[1317]: time="2025-09-09T00:37:01.971210282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.972815 env[1317]: time="2025-09-09T00:37:01.972542698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.974396 env[1317]: time="2025-09-09T00:37:01.974367008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.978091 env[1317]: time="2025-09-09T00:37:01.977635059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.984283 env[1317]: time="2025-09-09T00:37:01.984162376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.987103 env[1317]: time="2025-09-09T00:37:01.987075771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.988695 env[1317]: time="2025-09-09T00:37:01.988384110Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.991021 env[1317]: time="2025-09-09T00:37:01.989684741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:01.991021 env[1317]: time="2025-09-09T00:37:01.990370176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:37:02.018253 env[1317]: time="2025-09-09T00:37:02.018057796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:37:02.018253 env[1317]: time="2025-09-09T00:37:02.018095378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:37:02.018253 env[1317]: time="2025-09-09T00:37:02.018105603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:37:02.018418 env[1317]: time="2025-09-09T00:37:02.018271547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:37:02.018418 env[1317]: time="2025-09-09T00:37:02.018332374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:37:02.018418 env[1317]: time="2025-09-09T00:37:02.018345793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:37:02.018737 env[1317]: time="2025-09-09T00:37:02.018698770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bf91ae55f77dcd4886f638b8f7e772818b92beef1a7dc35b647fdcdcade2ee2 pid=1771 runtime=io.containerd.runc.v2
Sep 9 00:37:02.018787 env[1317]: time="2025-09-09T00:37:02.018714865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a5a6b6a1e8f28f5ced8ce53717599d0ad325bd34876141530e490830dd81ea1 pid=1772 runtime=io.containerd.runc.v2
Sep 9 00:37:02.021071 env[1317]: time="2025-09-09T00:37:02.020893352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:37:02.021071 env[1317]: time="2025-09-09T00:37:02.020928019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:37:02.021071 env[1317]: time="2025-09-09T00:37:02.020938283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:37:02.021335 env[1317]: time="2025-09-09T00:37:02.021159423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b852ae7fd3adc7accd473e28b55e2322d84b447e92e00fd7a5683d75a5312c74 pid=1788 runtime=io.containerd.runc.v2
Sep 9 00:37:02.028782 kubelet[1722]: E0909 00:37:02.028707 1722 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s"
Sep 9 00:37:02.075094 env[1317]: time="2025-09-09T00:37:02.075045135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf91ae55f77dcd4886f638b8f7e772818b92beef1a7dc35b647fdcdcade2ee2\""
Sep 9 00:37:02.075778 env[1317]: time="2025-09-09T00:37:02.075719138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b852ae7fd3adc7accd473e28b55e2322d84b447e92e00fd7a5683d75a5312c74\""
Sep 9 00:37:02.076447 kubelet[1722]: E0909 00:37:02.076166 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:02.078418 kubelet[1722]: E0909 00:37:02.076824 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:02.078536 env[1317]: time="2025-09-09T00:37:02.078501336Z" level=info msg="CreateContainer within sandbox \"2bf91ae55f77dcd4886f638b8f7e772818b92beef1a7dc35b647fdcdcade2ee2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:37:02.078750 env[1317]: time="2025-09-09T00:37:02.078726549Z" level=info msg="CreateContainer within sandbox \"b852ae7fd3adc7accd473e28b55e2322d84b447e92e00fd7a5683d75a5312c74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:37:02.089312 env[1317]: time="2025-09-09T00:37:02.089266569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51621733d8e937772b8db5544f072d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a5a6b6a1e8f28f5ced8ce53717599d0ad325bd34876141530e490830dd81ea1\""
Sep 9 00:37:02.090074 kubelet[1722]: E0909 00:37:02.089910 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:02.091613 env[1317]: time="2025-09-09T00:37:02.091582844Z" level=info msg="CreateContainer within sandbox \"6a5a6b6a1e8f28f5ced8ce53717599d0ad325bd34876141530e490830dd81ea1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:37:02.098407 env[1317]: time="2025-09-09T00:37:02.098361212Z" level=info msg="CreateContainer within sandbox \"b852ae7fd3adc7accd473e28b55e2322d84b447e92e00fd7a5683d75a5312c74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5618a272720d3928d4ca695193bb4987ff5fb8456a888a867c345be0b5131fa5\""
Sep 9 00:37:02.098959 env[1317]: time="2025-09-09T00:37:02.098919793Z" level=info msg="StartContainer for \"5618a272720d3928d4ca695193bb4987ff5fb8456a888a867c345be0b5131fa5\""
Sep 9 00:37:02.099493 env[1317]: time="2025-09-09T00:37:02.099452453Z" level=info msg="CreateContainer within sandbox \"2bf91ae55f77dcd4886f638b8f7e772818b92beef1a7dc35b647fdcdcade2ee2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e9052eb11457f544044bf495e587e850c913508cb2d403641a9549a29d5043f3\""
Sep 9 00:37:02.099974
env[1317]: time="2025-09-09T00:37:02.099937666Z" level=info msg="StartContainer for \"e9052eb11457f544044bf495e587e850c913508cb2d403641a9549a29d5043f3\"" Sep 9 00:37:02.107171 env[1317]: time="2025-09-09T00:37:02.107121570Z" level=info msg="CreateContainer within sandbox \"6a5a6b6a1e8f28f5ced8ce53717599d0ad325bd34876141530e490830dd81ea1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1f867142ed5bf23f9ef32600ba2e7d08e571bc12108e8847ff2d1bc6330198af\"" Sep 9 00:37:02.107691 env[1317]: time="2025-09-09T00:37:02.107660900Z" level=info msg="StartContainer for \"1f867142ed5bf23f9ef32600ba2e7d08e571bc12108e8847ff2d1bc6330198af\"" Sep 9 00:37:02.172986 env[1317]: time="2025-09-09T00:37:02.172843148Z" level=info msg="StartContainer for \"5618a272720d3928d4ca695193bb4987ff5fb8456a888a867c345be0b5131fa5\" returns successfully" Sep 9 00:37:02.186307 env[1317]: time="2025-09-09T00:37:02.186229147Z" level=info msg="StartContainer for \"1f867142ed5bf23f9ef32600ba2e7d08e571bc12108e8847ff2d1bc6330198af\" returns successfully" Sep 9 00:37:02.188149 env[1317]: time="2025-09-09T00:37:02.188104062Z" level=info msg="StartContainer for \"e9052eb11457f544044bf495e587e850c913508cb2d403641a9549a29d5043f3\" returns successfully" Sep 9 00:37:02.240489 kubelet[1722]: W0909 00:37:02.240354 1722 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 9 00:37:02.240679 kubelet[1722]: E0909 00:37:02.240653 1722 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:37:02.377326 kubelet[1722]: I0909 
00:37:02.377299 1722 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:37:02.645911 kubelet[1722]: E0909 00:37:02.645884 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:02.648091 kubelet[1722]: E0909 00:37:02.648067 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:02.650007 kubelet[1722]: E0909 00:37:02.649985 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:03.651507 kubelet[1722]: E0909 00:37:03.651475 1722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:03.776368 kubelet[1722]: E0909 00:37:03.776279 1722 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:37:03.889654 kubelet[1722]: I0909 00:37:03.889621 1722 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:37:03.889827 kubelet[1722]: E0909 00:37:03.889814 1722 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:37:03.902909 kubelet[1722]: E0909 00:37:03.902608 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:37:04.003621 kubelet[1722]: E0909 00:37:04.003580 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:37:04.104454 kubelet[1722]: E0909 00:37:04.104413 1722 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:37:04.205331 kubelet[1722]: E0909 00:37:04.205222 1722 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:37:04.604792 kubelet[1722]: I0909 00:37:04.604757 1722 apiserver.go:52] "Watching apiserver" Sep 9 00:37:04.620924 kubelet[1722]: I0909 00:37:04.620889 1722 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:37:05.640313 systemd[1]: Reloading. Sep 9 00:37:05.684469 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-09-09T00:37:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:37:05.684499 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-09-09T00:37:05Z" level=info msg="torcx already run" Sep 9 00:37:05.752697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:37:05.752719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:37:05.770029 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:37:05.858511 systemd[1]: Stopping kubelet.service... Sep 9 00:37:05.878475 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:37:05.878763 systemd[1]: Stopped kubelet.service. Sep 9 00:37:05.880574 systemd[1]: Starting kubelet.service... 
Sep 9 00:37:05.977665 systemd[1]: Started kubelet.service. Sep 9 00:37:06.018618 kubelet[2077]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:37:06.018618 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:37:06.018618 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:37:06.019147 kubelet[2077]: I0909 00:37:06.018658 2077 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:37:06.027581 kubelet[2077]: I0909 00:37:06.027530 2077 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:37:06.027581 kubelet[2077]: I0909 00:37:06.027572 2077 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:37:06.028145 kubelet[2077]: I0909 00:37:06.028127 2077 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:37:06.030287 kubelet[2077]: I0909 00:37:06.030268 2077 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 00:37:06.033103 kubelet[2077]: I0909 00:37:06.033075 2077 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:37:06.036774 kubelet[2077]: E0909 00:37:06.036743 2077 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:37:06.036904 kubelet[2077]: I0909 00:37:06.036891 2077 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:37:06.039336 kubelet[2077]: I0909 00:37:06.039312 2077 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:37:06.039750 kubelet[2077]: I0909 00:37:06.039733 2077 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:37:06.039940 kubelet[2077]: I0909 00:37:06.039912 2077 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:37:06.040425 kubelet[2077]: I0909 00:37:06.040253 2077 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 9 00:37:06.040602 kubelet[2077]: I0909 00:37:06.040585 2077 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:37:06.040679 kubelet[2077]: I0909 00:37:06.040670 2077 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:37:06.040788 kubelet[2077]: I0909 00:37:06.040776 2077 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:37:06.041068 kubelet[2077]: I0909 00:37:06.041052 2077 kubelet.go:408] "Attempting to 
sync node with API server" Sep 9 00:37:06.041163 kubelet[2077]: I0909 00:37:06.041151 2077 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:37:06.041435 kubelet[2077]: I0909 00:37:06.041412 2077 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:37:06.042207 kubelet[2077]: I0909 00:37:06.042190 2077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:37:06.047133 kubelet[2077]: I0909 00:37:06.047101 2077 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:37:06.052548 kubelet[2077]: I0909 00:37:06.052520 2077 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:37:06.053460 kubelet[2077]: I0909 00:37:06.053436 2077 server.go:1274] "Started kubelet" Sep 9 00:37:06.056324 kubelet[2077]: I0909 00:37:06.055453 2077 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:37:06.056929 kubelet[2077]: I0909 00:37:06.056896 2077 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:37:06.059162 kubelet[2077]: I0909 00:37:06.056868 2077 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:37:06.059348 kubelet[2077]: I0909 00:37:06.059281 2077 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:37:06.060362 kubelet[2077]: I0909 00:37:06.060337 2077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:37:06.060804 kubelet[2077]: I0909 00:37:06.060757 2077 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:37:06.064014 kubelet[2077]: I0909 00:37:06.063056 2077 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:37:06.064014 kubelet[2077]: I0909 00:37:06.063204 2077 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:37:06.064014 kubelet[2077]: I0909 00:37:06.063375 2077 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:37:06.065690 kubelet[2077]: I0909 00:37:06.064475 2077 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:37:06.065690 kubelet[2077]: I0909 00:37:06.064727 2077 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:37:06.066087 kubelet[2077]: E0909 00:37:06.066060 2077 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:37:06.066324 kubelet[2077]: E0909 00:37:06.066289 2077 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:37:06.072865 kubelet[2077]: I0909 00:37:06.072703 2077 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:37:06.084246 kubelet[2077]: I0909 00:37:06.084193 2077 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:37:06.087942 kubelet[2077]: I0909 00:37:06.087907 2077 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:37:06.088053 kubelet[2077]: I0909 00:37:06.087939 2077 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:37:06.088053 kubelet[2077]: I0909 00:37:06.087984 2077 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:37:06.088053 kubelet[2077]: E0909 00:37:06.088042 2077 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:37:06.112679 kubelet[2077]: I0909 00:37:06.112624 2077 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:37:06.112679 kubelet[2077]: I0909 00:37:06.112654 2077 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:37:06.112679 kubelet[2077]: I0909 00:37:06.112679 2077 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:37:06.112868 kubelet[2077]: I0909 00:37:06.112851 2077 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:37:06.112917 kubelet[2077]: I0909 00:37:06.112867 2077 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:37:06.112917 kubelet[2077]: I0909 00:37:06.112885 2077 policy_none.go:49] "None policy: Start" Sep 9 00:37:06.113569 kubelet[2077]: I0909 00:37:06.113530 2077 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:37:06.113569 kubelet[2077]: I0909 00:37:06.113567 2077 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:37:06.113742 kubelet[2077]: I0909 00:37:06.113725 2077 state_mem.go:75] "Updated machine memory state" Sep 9 00:37:06.118124 kubelet[2077]: I0909 00:37:06.118103 2077 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:37:06.119371 kubelet[2077]: I0909 00:37:06.119349 2077 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:37:06.119573 kubelet[2077]: I0909 00:37:06.119504 2077 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:37:06.120599 kubelet[2077]: I0909 00:37:06.120425 2077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:37:06.224659 kubelet[2077]: I0909 00:37:06.224627 2077 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:37:06.231356 kubelet[2077]: I0909 00:37:06.231270 2077 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:37:06.231356 kubelet[2077]: I0909 00:37:06.231342 2077 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:37:06.264811 kubelet[2077]: I0909 00:37:06.264765 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:06.265035 kubelet[2077]: I0909 00:37:06.265015 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:06.265143 kubelet[2077]: I0909 00:37:06.265128 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:37:06.265240 kubelet[2077]: I0909 00:37:06.265226 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:37:06.265321 kubelet[2077]: I0909 00:37:06.265308 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:37:06.265397 kubelet[2077]: I0909 00:37:06.265381 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51621733d8e937772b8db5544f072d3f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51621733d8e937772b8db5544f072d3f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:37:06.265521 kubelet[2077]: I0909 00:37:06.265507 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:06.265597 kubelet[2077]: I0909 00:37:06.265585 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:06.265734 kubelet[2077]: I0909 00:37:06.265690 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:06.496777 kubelet[2077]: E0909 00:37:06.496664 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:06.497289 kubelet[2077]: E0909 00:37:06.497260 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:06.497694 kubelet[2077]: E0909 00:37:06.497671 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:06.635434 sudo[2112]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:37:06.635669 sudo[2112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 9 00:37:07.043133 kubelet[2077]: I0909 00:37:07.043081 2077 apiserver.go:52] "Watching apiserver" Sep 9 00:37:07.063963 kubelet[2077]: I0909 00:37:07.063905 2077 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:37:07.080460 sudo[2112]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:07.097716 kubelet[2077]: E0909 00:37:07.097672 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:07.106995 kubelet[2077]: E0909 00:37:07.104747 2077 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:37:07.106995 
kubelet[2077]: E0909 00:37:07.104910 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:07.106995 kubelet[2077]: E0909 00:37:07.105222 2077 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:37:07.106995 kubelet[2077]: E0909 00:37:07.105326 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:07.118475 kubelet[2077]: I0909 00:37:07.118422 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.118409154 podStartE2EDuration="1.118409154s" podCreationTimestamp="2025-09-09 00:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:07.117112177 +0000 UTC m=+1.134500866" watchObservedRunningTime="2025-09-09 00:37:07.118409154 +0000 UTC m=+1.135797803" Sep 9 00:37:07.124906 kubelet[2077]: I0909 00:37:07.124778 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.124764981 podStartE2EDuration="1.124764981s" podCreationTimestamp="2025-09-09 00:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:07.124622653 +0000 UTC m=+1.142011302" watchObservedRunningTime="2025-09-09 00:37:07.124764981 +0000 UTC m=+1.142153630" Sep 9 00:37:07.139401 kubelet[2077]: I0909 00:37:07.139349 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.139330052 podStartE2EDuration="1.139330052s" podCreationTimestamp="2025-09-09 00:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:07.131047865 +0000 UTC m=+1.148436554" watchObservedRunningTime="2025-09-09 00:37:07.139330052 +0000 UTC m=+1.156718701" Sep 9 00:37:08.099577 kubelet[2077]: E0909 00:37:08.099526 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:08.099577 kubelet[2077]: E0909 00:37:08.099537 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:08.905355 sudo[1441]: pam_unix(sudo:session): session closed for user root Sep 9 00:37:08.907209 sshd[1435]: pam_unix(sshd:session): session closed for user core Sep 9 00:37:08.910114 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:60408.service: Deactivated successfully. Sep 9 00:37:08.911347 systemd-logind[1303]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:37:08.911398 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:37:08.912379 systemd-logind[1303]: Removed session 5. Sep 9 00:37:09.003440 kubelet[2077]: E0909 00:37:09.003393 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:10.119336 kubelet[2077]: I0909 00:37:10.119309 2077 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:37:10.120128 env[1317]: time="2025-09-09T00:37:10.120033091Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 00:37:10.120361 kubelet[2077]: I0909 00:37:10.120242 2077 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:37:10.899085 kubelet[2077]: I0909 00:37:10.899029 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-xtables-lock\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899085 kubelet[2077]: I0909 00:37:10.899075 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-kernel\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899272 kubelet[2077]: I0909 00:37:10.899113 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de74ba48-5d7d-48e6-bcf0-781d91ecc106-xtables-lock\") pod \"kube-proxy-zf5bh\" (UID: \"de74ba48-5d7d-48e6-bcf0-781d91ecc106\") " pod="kube-system/kube-proxy-zf5bh" Sep 9 00:37:10.899272 kubelet[2077]: I0909 00:37:10.899131 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/890d621f-4bd4-4cfb-86e8-25283278fd27-clustermesh-secrets\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899272 kubelet[2077]: I0909 00:37:10.899150 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de74ba48-5d7d-48e6-bcf0-781d91ecc106-lib-modules\") pod \"kube-proxy-zf5bh\" (UID: 
\"de74ba48-5d7d-48e6-bcf0-781d91ecc106\") " pod="kube-system/kube-proxy-zf5bh" Sep 9 00:37:10.899272 kubelet[2077]: I0909 00:37:10.899177 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-cgroup\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899272 kubelet[2077]: I0909 00:37:10.899196 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tkt\" (UniqueName: \"kubernetes.io/projected/de74ba48-5d7d-48e6-bcf0-781d91ecc106-kube-api-access-84tkt\") pod \"kube-proxy-zf5bh\" (UID: \"de74ba48-5d7d-48e6-bcf0-781d91ecc106\") " pod="kube-system/kube-proxy-zf5bh" Sep 9 00:37:10.899405 kubelet[2077]: I0909 00:37:10.899214 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-hostproc\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899405 kubelet[2077]: I0909 00:37:10.899232 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-net\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899405 kubelet[2077]: I0909 00:37:10.899257 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-config-path\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899405 
kubelet[2077]: I0909 00:37:10.899273 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-run\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899405 kubelet[2077]: I0909 00:37:10.899288 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-bpf-maps\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899405 kubelet[2077]: I0909 00:37:10.899303 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-etc-cni-netd\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899527 kubelet[2077]: I0909 00:37:10.899325 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-lib-modules\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899527 kubelet[2077]: I0909 00:37:10.899373 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-hubble-tls\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899527 kubelet[2077]: I0909 00:37:10.899415 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7wz\" 
(UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-kube-api-access-jg7wz\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:10.899527 kubelet[2077]: I0909 00:37:10.899447 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de74ba48-5d7d-48e6-bcf0-781d91ecc106-kube-proxy\") pod \"kube-proxy-zf5bh\" (UID: \"de74ba48-5d7d-48e6-bcf0-781d91ecc106\") " pod="kube-system/kube-proxy-zf5bh" Sep 9 00:37:10.899527 kubelet[2077]: I0909 00:37:10.899463 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cni-path\") pod \"cilium-bbngp\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " pod="kube-system/cilium-bbngp" Sep 9 00:37:11.000798 kubelet[2077]: I0909 00:37:11.000751 2077 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:37:11.100041 kubelet[2077]: E0909 00:37:11.099992 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.100629 env[1317]: time="2025-09-09T00:37:11.100568284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zf5bh,Uid:de74ba48-5d7d-48e6-bcf0-781d91ecc106,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:11.102917 kubelet[2077]: E0909 00:37:11.102394 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.103523 env[1317]: time="2025-09-09T00:37:11.103350587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbngp,Uid:890d621f-4bd4-4cfb-86e8-25283278fd27,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:11.120453 env[1317]: time="2025-09-09T00:37:11.120371998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:11.120453 env[1317]: time="2025-09-09T00:37:11.120417768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:11.120453 env[1317]: time="2025-09-09T00:37:11.120438493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:11.120744 env[1317]: time="2025-09-09T00:37:11.120566082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b pid=2179 runtime=io.containerd.runc.v2 Sep 9 00:37:11.121017 env[1317]: time="2025-09-09T00:37:11.120870670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:11.121017 env[1317]: time="2025-09-09T00:37:11.120903797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:11.121017 env[1317]: time="2025-09-09T00:37:11.120913519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:11.121331 env[1317]: time="2025-09-09T00:37:11.121245714Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc4eeb5e3ce9e9e90dd6acb550db2e14c79c0c10394910ded60e320513d0d3a2 pid=2180 runtime=io.containerd.runc.v2 Sep 9 00:37:11.158692 env[1317]: time="2025-09-09T00:37:11.158576713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbngp,Uid:890d621f-4bd4-4cfb-86e8-25283278fd27,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\"" Sep 9 00:37:11.160018 kubelet[2077]: E0909 00:37:11.159995 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.161652 env[1317]: time="2025-09-09T00:37:11.161605911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 
00:37:11.171446 env[1317]: time="2025-09-09T00:37:11.171400745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zf5bh,Uid:de74ba48-5d7d-48e6-bcf0-781d91ecc106,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc4eeb5e3ce9e9e90dd6acb550db2e14c79c0c10394910ded60e320513d0d3a2\"" Sep 9 00:37:11.172192 kubelet[2077]: E0909 00:37:11.172006 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.175010 env[1317]: time="2025-09-09T00:37:11.174978826Z" level=info msg="CreateContainer within sandbox \"bc4eeb5e3ce9e9e90dd6acb550db2e14c79c0c10394910ded60e320513d0d3a2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:37:11.186503 env[1317]: time="2025-09-09T00:37:11.186437552Z" level=info msg="CreateContainer within sandbox \"bc4eeb5e3ce9e9e90dd6acb550db2e14c79c0c10394910ded60e320513d0d3a2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d749c1005bef37a1550e6a03fe40255ac0f4fef267e82b331022e103f97749d\"" Sep 9 00:37:11.187825 env[1317]: time="2025-09-09T00:37:11.187060691Z" level=info msg="StartContainer for \"9d749c1005bef37a1550e6a03fe40255ac0f4fef267e82b331022e103f97749d\"" Sep 9 00:37:11.265368 env[1317]: time="2025-09-09T00:37:11.265279966Z" level=info msg="StartContainer for \"9d749c1005bef37a1550e6a03fe40255ac0f4fef267e82b331022e103f97749d\" returns successfully" Sep 9 00:37:11.302102 kubelet[2077]: I0909 00:37:11.302045 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef734df2-8351-4413-8d47-fd4e3df1cd89-cilium-config-path\") pod \"cilium-operator-5d85765b45-tklxk\" (UID: \"ef734df2-8351-4413-8d47-fd4e3df1cd89\") " pod="kube-system/cilium-operator-5d85765b45-tklxk" Sep 9 00:37:11.302225 kubelet[2077]: I0909 00:37:11.302132 2077 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnp8\" (UniqueName: \"kubernetes.io/projected/ef734df2-8351-4413-8d47-fd4e3df1cd89-kube-api-access-wxnp8\") pod \"cilium-operator-5d85765b45-tklxk\" (UID: \"ef734df2-8351-4413-8d47-fd4e3df1cd89\") " pod="kube-system/cilium-operator-5d85765b45-tklxk" Sep 9 00:37:11.543496 kubelet[2077]: E0909 00:37:11.543384 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:11.544348 env[1317]: time="2025-09-09T00:37:11.544307126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tklxk,Uid:ef734df2-8351-4413-8d47-fd4e3df1cd89,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:11.592187 env[1317]: time="2025-09-09T00:37:11.592034894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:37:11.592187 env[1317]: time="2025-09-09T00:37:11.592084585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:37:11.592495 env[1317]: time="2025-09-09T00:37:11.592446106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:37:11.592877 env[1317]: time="2025-09-09T00:37:11.592834473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3 pid=2375 runtime=io.containerd.runc.v2 Sep 9 00:37:11.642268 env[1317]: time="2025-09-09T00:37:11.642208368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tklxk,Uid:ef734df2-8351-4413-8d47-fd4e3df1cd89,Namespace:kube-system,Attempt:0,} returns sandbox id \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\"" Sep 9 00:37:11.643195 kubelet[2077]: E0909 00:37:11.643159 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.109626 kubelet[2077]: E0909 00:37:12.109590 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.651499 kubelet[2077]: E0909 00:37:12.651436 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:12.669176 kubelet[2077]: I0909 00:37:12.669119 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zf5bh" podStartSLOduration=2.669100287 podStartE2EDuration="2.669100287s" podCreationTimestamp="2025-09-09 00:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:12.140639787 +0000 UTC m=+6.158028436" watchObservedRunningTime="2025-09-09 00:37:12.669100287 +0000 UTC m=+6.686488936" Sep 9 00:37:13.111736 kubelet[2077]: E0909 00:37:13.111679 2077 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:14.939825 kubelet[2077]: E0909 00:37:14.939560 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:15.114152 kubelet[2077]: E0909 00:37:15.114044 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:17.935720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215450284.mount: Deactivated successfully. Sep 9 00:37:19.015352 kubelet[2077]: E0909 00:37:19.014605 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:20.192434 env[1317]: time="2025-09-09T00:37:20.192364476Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:20.194706 env[1317]: time="2025-09-09T00:37:20.194671354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:20.196804 env[1317]: time="2025-09-09T00:37:20.196770682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:20.197370 env[1317]: time="2025-09-09T00:37:20.197332720Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 00:37:20.200258 env[1317]: time="2025-09-09T00:37:20.200223797Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:37:20.201472 env[1317]: time="2025-09-09T00:37:20.201441885Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:37:20.211261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008737453.mount: Deactivated successfully. Sep 9 00:37:20.216343 env[1317]: time="2025-09-09T00:37:20.216302249Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\"" Sep 9 00:37:20.217580 env[1317]: time="2025-09-09T00:37:20.217100319Z" level=info msg="StartContainer for \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\"" Sep 9 00:37:20.264691 env[1317]: time="2025-09-09T00:37:20.264649140Z" level=info msg="StartContainer for \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\" returns successfully" Sep 9 00:37:20.431271 env[1317]: time="2025-09-09T00:37:20.431157325Z" level=info msg="shim disconnected" id=21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844 Sep 9 00:37:20.431271 env[1317]: time="2025-09-09T00:37:20.431272781Z" level=warning msg="cleaning up after shim disconnected" id=21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844 namespace=k8s.io Sep 9 00:37:20.431492 env[1317]: time="2025-09-09T00:37:20.431283703Z" level=info msg="cleaning up dead shim" 
Sep 9 00:37:20.438278 env[1317]: time="2025-09-09T00:37:20.438240060Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n" Sep 9 00:37:21.125481 kubelet[2077]: E0909 00:37:21.125384 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:21.131038 env[1317]: time="2025-09-09T00:37:21.130991865Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:37:21.146468 env[1317]: time="2025-09-09T00:37:21.146422042Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\"" Sep 9 00:37:21.148344 env[1317]: time="2025-09-09T00:37:21.147925919Z" level=info msg="StartContainer for \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\"" Sep 9 00:37:21.194416 env[1317]: time="2025-09-09T00:37:21.194365549Z" level=info msg="StartContainer for \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\" returns successfully" Sep 9 00:37:21.203197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:37:21.203450 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:37:21.203614 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:37:21.205156 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:37:21.210151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844-rootfs.mount: Deactivated successfully. Sep 9 00:37:21.214736 systemd[1]: Finished systemd-sysctl.service. 
Sep 9 00:37:21.225813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6-rootfs.mount: Deactivated successfully. Sep 9 00:37:21.231615 env[1317]: time="2025-09-09T00:37:21.231563291Z" level=info msg="shim disconnected" id=54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6 Sep 9 00:37:21.231615 env[1317]: time="2025-09-09T00:37:21.231609897Z" level=warning msg="cleaning up after shim disconnected" id=54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6 namespace=k8s.io Sep 9 00:37:21.231615 env[1317]: time="2025-09-09T00:37:21.231619218Z" level=info msg="cleaning up dead shim" Sep 9 00:37:21.238033 env[1317]: time="2025-09-09T00:37:21.237992171Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2569 runtime=io.containerd.runc.v2\n" Sep 9 00:37:21.315932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416505051.mount: Deactivated successfully. Sep 9 00:37:21.973675 update_engine[1306]: I0909 00:37:21.973625 1306 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:37:22.026751 env[1317]: time="2025-09-09T00:37:22.026132146Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:22.027042 env[1317]: time="2025-09-09T00:37:22.026998774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:22.028989 env[1317]: time="2025-09-09T00:37:22.028423111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:37:22.029672 env[1317]: time="2025-09-09T00:37:22.029630861Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 00:37:22.031807 env[1317]: time="2025-09-09T00:37:22.031776128Z" level=info msg="CreateContainer within sandbox \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:37:22.040006 env[1317]: time="2025-09-09T00:37:22.039954344Z" level=info msg="CreateContainer within sandbox \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\"" Sep 9 00:37:22.040545 env[1317]: time="2025-09-09T00:37:22.040497572Z" level=info msg="StartContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\"" Sep 9 00:37:22.097289 
env[1317]: time="2025-09-09T00:37:22.097242904Z" level=info msg="StartContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" returns successfully" Sep 9 00:37:22.131800 kubelet[2077]: E0909 00:37:22.131770 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:22.136913 kubelet[2077]: E0909 00:37:22.136879 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:22.139328 env[1317]: time="2025-09-09T00:37:22.139247564Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:37:22.146571 kubelet[2077]: I0909 00:37:22.146509 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-tklxk" podStartSLOduration=0.760481851 podStartE2EDuration="11.146494545s" podCreationTimestamp="2025-09-09 00:37:11 +0000 UTC" firstStartedPulling="2025-09-09 00:37:11.64444631 +0000 UTC m=+5.661834919" lastFinishedPulling="2025-09-09 00:37:22.030459004 +0000 UTC m=+16.047847613" observedRunningTime="2025-09-09 00:37:22.146068092 +0000 UTC m=+16.163456741" watchObservedRunningTime="2025-09-09 00:37:22.146494545 +0000 UTC m=+16.163883194" Sep 9 00:37:22.228500 env[1317]: time="2025-09-09T00:37:22.228342557Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\"" Sep 9 00:37:22.229076 env[1317]: time="2025-09-09T00:37:22.229044925Z" level=info msg="StartContainer for 
\"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\"" Sep 9 00:37:22.260521 systemd[1]: run-containerd-runc-k8s.io-ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691-runc.auoVfU.mount: Deactivated successfully. Sep 9 00:37:22.320010 env[1317]: time="2025-09-09T00:37:22.317346419Z" level=info msg="StartContainer for \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\" returns successfully" Sep 9 00:37:22.352126 env[1317]: time="2025-09-09T00:37:22.351914635Z" level=info msg="shim disconnected" id=ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691 Sep 9 00:37:22.352310 env[1317]: time="2025-09-09T00:37:22.352129062Z" level=warning msg="cleaning up after shim disconnected" id=ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691 namespace=k8s.io Sep 9 00:37:22.352310 env[1317]: time="2025-09-09T00:37:22.352143144Z" level=info msg="cleaning up dead shim" Sep 9 00:37:22.359381 env[1317]: time="2025-09-09T00:37:22.359339718Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2679 runtime=io.containerd.runc.v2\n" Sep 9 00:37:23.141373 kubelet[2077]: E0909 00:37:23.141173 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:23.141373 kubelet[2077]: E0909 00:37:23.141179 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:23.143512 env[1317]: time="2025-09-09T00:37:23.143473744Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:37:23.156634 env[1317]: time="2025-09-09T00:37:23.156582414Z" level=info 
msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\"" Sep 9 00:37:23.157270 env[1317]: time="2025-09-09T00:37:23.157243812Z" level=info msg="StartContainer for \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\"" Sep 9 00:37:23.209184 env[1317]: time="2025-09-09T00:37:23.209137869Z" level=info msg="StartContainer for \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\" returns successfully" Sep 9 00:37:23.209360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691-rootfs.mount: Deactivated successfully. Sep 9 00:37:23.228184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106-rootfs.mount: Deactivated successfully. 
Sep 9 00:37:23.230514 env[1317]: time="2025-09-09T00:37:23.230474273Z" level=info msg="shim disconnected" id=eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106 Sep 9 00:37:23.230879 env[1317]: time="2025-09-09T00:37:23.230856718Z" level=warning msg="cleaning up after shim disconnected" id=eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106 namespace=k8s.io Sep 9 00:37:23.230973 env[1317]: time="2025-09-09T00:37:23.230957010Z" level=info msg="cleaning up dead shim" Sep 9 00:37:23.238257 env[1317]: time="2025-09-09T00:37:23.238222629Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:37:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2734 runtime=io.containerd.runc.v2\n" Sep 9 00:37:24.147090 kubelet[2077]: E0909 00:37:24.147058 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:24.149298 env[1317]: time="2025-09-09T00:37:24.149250930Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:37:24.178664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663713781.mount: Deactivated successfully. 
Sep 9 00:37:24.181704 env[1317]: time="2025-09-09T00:37:24.181650058Z" level=info msg="CreateContainer within sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\"" Sep 9 00:37:24.182204 env[1317]: time="2025-09-09T00:37:24.182171957Z" level=info msg="StartContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\"" Sep 9 00:37:24.240084 env[1317]: time="2025-09-09T00:37:24.240040513Z" level=info msg="StartContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" returns successfully" Sep 9 00:37:24.371347 kubelet[2077]: I0909 00:37:24.371308 2077 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:37:24.394526 kubelet[2077]: I0909 00:37:24.393850 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431c0807-ccf2-4266-840e-044e232bb07c-config-volume\") pod \"coredns-7c65d6cfc9-hmw8h\" (UID: \"431c0807-ccf2-4266-840e-044e232bb07c\") " pod="kube-system/coredns-7c65d6cfc9-hmw8h" Sep 9 00:37:24.394526 kubelet[2077]: I0909 00:37:24.393899 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6sh7\" (UniqueName: \"kubernetes.io/projected/431c0807-ccf2-4266-840e-044e232bb07c-kube-api-access-m6sh7\") pod \"coredns-7c65d6cfc9-hmw8h\" (UID: \"431c0807-ccf2-4266-840e-044e232bb07c\") " pod="kube-system/coredns-7c65d6cfc9-hmw8h" Sep 9 00:37:24.394970 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 9 00:37:24.494362 kubelet[2077]: I0909 00:37:24.494249 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c53263ac-32e6-4d4b-adf0-2c92345825fd-config-volume\") pod \"coredns-7c65d6cfc9-qlrrx\" (UID: \"c53263ac-32e6-4d4b-adf0-2c92345825fd\") " pod="kube-system/coredns-7c65d6cfc9-qlrrx" Sep 9 00:37:24.494544 kubelet[2077]: I0909 00:37:24.494524 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpz4f\" (UniqueName: \"kubernetes.io/projected/c53263ac-32e6-4d4b-adf0-2c92345825fd-kube-api-access-bpz4f\") pod \"coredns-7c65d6cfc9-qlrrx\" (UID: \"c53263ac-32e6-4d4b-adf0-2c92345825fd\") " pod="kube-system/coredns-7c65d6cfc9-qlrrx" Sep 9 00:37:24.629979 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:37:24.694442 kubelet[2077]: E0909 00:37:24.694399 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:24.695452 env[1317]: time="2025-09-09T00:37:24.695400952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hmw8h,Uid:431c0807-ccf2-4266-840e-044e232bb07c,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:24.700550 kubelet[2077]: E0909 00:37:24.699425 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:24.700654 env[1317]: time="2025-09-09T00:37:24.700120404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qlrrx,Uid:c53263ac-32e6-4d4b-adf0-2c92345825fd,Namespace:kube-system,Attempt:0,}" Sep 9 00:37:25.152459 kubelet[2077]: E0909 00:37:25.152141 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:25.166370 kubelet[2077]: I0909 00:37:25.166297 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bbngp" podStartSLOduration=6.127494677 podStartE2EDuration="15.166280504s" podCreationTimestamp="2025-09-09 00:37:10 +0000 UTC" firstStartedPulling="2025-09-09 00:37:11.160713471 +0000 UTC m=+5.178102080" lastFinishedPulling="2025-09-09 00:37:20.199499298 +0000 UTC m=+14.216887907" observedRunningTime="2025-09-09 00:37:25.165940147 +0000 UTC m=+19.183328836" watchObservedRunningTime="2025-09-09 00:37:25.166280504 +0000 UTC m=+19.183669153" Sep 9 00:37:26.155597 kubelet[2077]: E0909 00:37:26.155567 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:37:26.259817 systemd-networkd[1095]: cilium_host: Link UP Sep 9 00:37:26.262318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 9 00:37:26.262352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 9 00:37:26.259934 systemd-networkd[1095]: cilium_net: Link UP Sep 9 00:37:26.261286 systemd-networkd[1095]: cilium_net: Gained carrier Sep 9 00:37:26.263601 systemd-networkd[1095]: cilium_host: Gained carrier Sep 9 00:37:26.263767 systemd-networkd[1095]: cilium_net: Gained IPv6LL Sep 9 00:37:26.263892 systemd-networkd[1095]: cilium_host: Gained IPv6LL Sep 9 00:37:26.358147 systemd-networkd[1095]: cilium_vxlan: Link UP Sep 9 00:37:26.358153 systemd-networkd[1095]: cilium_vxlan: Gained carrier Sep 9 00:37:26.632594 kernel: NET: Registered PF_ALG protocol family Sep 9 00:37:27.157549 kubelet[2077]: E0909 00:37:27.157492 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 9 00:37:27.280923 systemd-networkd[1095]: lxc_health: Link UP
Sep 9 00:37:27.290795 systemd-networkd[1095]: lxc_health: Gained carrier
Sep 9 00:37:27.291107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 9 00:37:27.745699 systemd-networkd[1095]: lxcc4bb33bf0333: Link UP
Sep 9 00:37:27.756656 kernel: eth0: renamed from tmpaff89
Sep 9 00:37:27.779005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc4bb33bf0333: link becomes ready
Sep 9 00:37:27.779898 systemd-networkd[1095]: lxcc4bb33bf0333: Gained carrier
Sep 9 00:37:27.787807 systemd-networkd[1095]: lxc53633a734223: Link UP
Sep 9 00:37:27.797980 kernel: eth0: renamed from tmp615c9
Sep 9 00:37:27.814425 systemd-networkd[1095]: lxc53633a734223: Gained carrier
Sep 9 00:37:27.815093 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc53633a734223: link becomes ready
Sep 9 00:37:28.226115 systemd-networkd[1095]: cilium_vxlan: Gained IPv6LL
Sep 9 00:37:28.738122 systemd-networkd[1095]: lxc_health: Gained IPv6LL
Sep 9 00:37:29.058168 systemd-networkd[1095]: lxcc4bb33bf0333: Gained IPv6LL
Sep 9 00:37:29.109796 kubelet[2077]: E0909 00:37:29.109121 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:29.160075 kubelet[2077]: E0909 00:37:29.160039 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:29.762128 systemd-networkd[1095]: lxc53633a734223: Gained IPv6LL
Sep 9 00:37:30.162234 kubelet[2077]: E0909 00:37:30.162167 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:31.551971 env[1317]: time="2025-09-09T00:37:31.548252337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:37:31.551971 env[1317]: time="2025-09-09T00:37:31.548314782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:37:31.551971 env[1317]: time="2025-09-09T00:37:31.548325503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:37:31.551971 env[1317]: time="2025-09-09T00:37:31.548553202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aff89a1a7dd8fc502dcdd1c9ac03af40206b7f1efb8c32e0576858bf7859687c pid=3301 runtime=io.containerd.runc.v2
Sep 9 00:37:31.598336 env[1317]: time="2025-09-09T00:37:31.594162173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:37:31.598336 env[1317]: time="2025-09-09T00:37:31.594206337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:37:31.598336 env[1317]: time="2025-09-09T00:37:31.594216658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:37:31.598336 env[1317]: time="2025-09-09T00:37:31.594360349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/615c993cb5c919e0882a9d8324bb33f85c19dff2772be56a516f6a136e317780 pid=3335 runtime=io.containerd.runc.v2
Sep 9 00:37:31.600144 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:37:31.608614 systemd[1]: run-containerd-runc-k8s.io-615c993cb5c919e0882a9d8324bb33f85c19dff2772be56a516f6a136e317780-runc.x49HdD.mount: Deactivated successfully.
Sep 9 00:37:31.627193 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:37:31.627504 env[1317]: time="2025-09-09T00:37:31.627467138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qlrrx,Uid:c53263ac-32e6-4d4b-adf0-2c92345825fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"aff89a1a7dd8fc502dcdd1c9ac03af40206b7f1efb8c32e0576858bf7859687c\""
Sep 9 00:37:31.628293 kubelet[2077]: E0909 00:37:31.628265 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:31.631581 env[1317]: time="2025-09-09T00:37:31.631533511Z" level=info msg="CreateContainer within sandbox \"aff89a1a7dd8fc502dcdd1c9ac03af40206b7f1efb8c32e0576858bf7859687c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:37:31.645781 env[1317]: time="2025-09-09T00:37:31.645735073Z" level=info msg="CreateContainer within sandbox \"aff89a1a7dd8fc502dcdd1c9ac03af40206b7f1efb8c32e0576858bf7859687c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d6bc97d745ffe871b62d98cd3ec6dc2c34ecc0c2f3289d98d94732b321de154\""
Sep 9 00:37:31.646881 env[1317]: time="2025-09-09T00:37:31.646829882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hmw8h,Uid:431c0807-ccf2-4266-840e-044e232bb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"615c993cb5c919e0882a9d8324bb33f85c19dff2772be56a516f6a136e317780\""
Sep 9 00:37:31.647221 env[1317]: time="2025-09-09T00:37:31.647174310Z" level=info msg="StartContainer for \"1d6bc97d745ffe871b62d98cd3ec6dc2c34ecc0c2f3289d98d94732b321de154\""
Sep 9 00:37:31.647730 kubelet[2077]: E0909 00:37:31.647687 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:31.649913 env[1317]: time="2025-09-09T00:37:31.649865771Z" level=info msg="CreateContainer within sandbox \"615c993cb5c919e0882a9d8324bb33f85c19dff2772be56a516f6a136e317780\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:37:31.665311 env[1317]: time="2025-09-09T00:37:31.665246429Z" level=info msg="CreateContainer within sandbox \"615c993cb5c919e0882a9d8324bb33f85c19dff2772be56a516f6a136e317780\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9732dfcd9a70d604e1ec6ec0c891347dd87715ccd964da1e8b14244b1a4ae44\""
Sep 9 00:37:31.666095 env[1317]: time="2025-09-09T00:37:31.666057815Z" level=info msg="StartContainer for \"b9732dfcd9a70d604e1ec6ec0c891347dd87715ccd964da1e8b14244b1a4ae44\""
Sep 9 00:37:31.704880 env[1317]: time="2025-09-09T00:37:31.704817427Z" level=info msg="StartContainer for \"1d6bc97d745ffe871b62d98cd3ec6dc2c34ecc0c2f3289d98d94732b321de154\" returns successfully"
Sep 9 00:37:31.720519 env[1317]: time="2025-09-09T00:37:31.720292333Z" level=info msg="StartContainer for \"b9732dfcd9a70d604e1ec6ec0c891347dd87715ccd964da1e8b14244b1a4ae44\" returns successfully"
Sep 9 00:37:32.167786 kubelet[2077]: E0909 00:37:32.167706 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:32.172983 kubelet[2077]: E0909 00:37:32.170152 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:32.185930 kubelet[2077]: I0909 00:37:32.185884 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hmw8h" podStartSLOduration=21.185852239 podStartE2EDuration="21.185852239s" podCreationTimestamp="2025-09-09 00:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:32.184883083 +0000 UTC m=+26.202271732" watchObservedRunningTime="2025-09-09 00:37:32.185852239 +0000 UTC m=+26.203240848"
Sep 9 00:37:32.216016 kubelet[2077]: I0909 00:37:32.215936 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qlrrx" podStartSLOduration=21.215921238 podStartE2EDuration="21.215921238s" podCreationTimestamp="2025-09-09 00:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:37:32.215653017 +0000 UTC m=+26.233041666" watchObservedRunningTime="2025-09-09 00:37:32.215921238 +0000 UTC m=+26.233309887"
Sep 9 00:37:32.661272 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:58968.service.
Sep 9 00:37:32.723863 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 58968 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:32.725324 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:32.732088 systemd-logind[1303]: New session 6 of user core.
Sep 9 00:37:32.732569 systemd[1]: Started session-6.scope.
Sep 9 00:37:32.880744 sshd[3460]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:32.883457 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:58968.service: Deactivated successfully.
Sep 9 00:37:32.884588 systemd-logind[1303]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:37:32.884783 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:37:32.885576 systemd-logind[1303]: Removed session 6.
Sep 9 00:37:33.172372 kubelet[2077]: E0909 00:37:33.172268 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:33.172372 kubelet[2077]: E0909 00:37:33.172309 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:34.174380 kubelet[2077]: E0909 00:37:34.174326 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:34.174839 kubelet[2077]: E0909 00:37:34.174618 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:37:37.884009 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:58980.service.
Sep 9 00:37:37.924733 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 58980 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:37.926140 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:37.931346 systemd-logind[1303]: New session 7 of user core.
Sep 9 00:37:37.932154 systemd[1]: Started session-7.scope.
Sep 9 00:37:38.049662 sshd[3475]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:38.052705 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:58980.service: Deactivated successfully.
Sep 9 00:37:38.053635 systemd-logind[1303]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:37:38.053693 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:37:38.054485 systemd-logind[1303]: Removed session 7.
Sep 9 00:37:43.053346 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:53028.service.
Sep 9 00:37:43.097553 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 53028 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:43.098992 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:43.102807 systemd-logind[1303]: New session 8 of user core.
Sep 9 00:37:43.103631 systemd[1]: Started session-8.scope.
Sep 9 00:37:43.234083 sshd[3492]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:43.236573 systemd-logind[1303]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:37:43.236736 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:53028.service: Deactivated successfully.
Sep 9 00:37:43.237957 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:37:43.238439 systemd-logind[1303]: Removed session 8.
Sep 9 00:37:48.237103 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:53038.service.
Sep 9 00:37:48.280128 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:48.281312 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:48.285094 systemd-logind[1303]: New session 9 of user core.
Sep 9 00:37:48.285924 systemd[1]: Started session-9.scope.
Sep 9 00:37:48.403901 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:53052.service.
Sep 9 00:37:48.404055 sshd[3507]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:48.411168 systemd-logind[1303]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:37:48.411342 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:53038.service: Deactivated successfully.
Sep 9 00:37:48.412112 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:37:48.412526 systemd-logind[1303]: Removed session 9.
Sep 9 00:37:48.446018 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:48.447259 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:48.451283 systemd-logind[1303]: New session 10 of user core.
Sep 9 00:37:48.452299 systemd[1]: Started session-10.scope.
Sep 9 00:37:48.622765 sshd[3521]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:48.627864 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:53066.service.
Sep 9 00:37:48.630315 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:53052.service: Deactivated successfully.
Sep 9 00:37:48.631171 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:37:48.634396 systemd-logind[1303]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:37:48.636473 systemd-logind[1303]: Removed session 10.
Sep 9 00:37:48.675438 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 53066 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:48.676926 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:48.681378 systemd[1]: Started session-11.scope.
Sep 9 00:37:48.681598 systemd-logind[1303]: New session 11 of user core.
Sep 9 00:37:48.796683 sshd[3534]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:48.799525 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:53066.service: Deactivated successfully.
Sep 9 00:37:48.800583 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:37:48.800900 systemd-logind[1303]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:37:48.801566 systemd-logind[1303]: Removed session 11.
Sep 9 00:37:53.800306 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:33158.service.
Sep 9 00:37:53.837588 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:53.839026 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:53.842796 systemd-logind[1303]: New session 12 of user core.
Sep 9 00:37:53.843762 systemd[1]: Started session-12.scope.
Sep 9 00:37:53.955301 sshd[3550]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:53.957919 systemd-logind[1303]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:37:53.958163 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:33158.service: Deactivated successfully.
Sep 9 00:37:53.958985 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:37:53.959378 systemd-logind[1303]: Removed session 12.
Sep 9 00:37:58.961194 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:33160.service.
Sep 9 00:37:59.001210 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 33160 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:59.002649 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:59.008276 systemd[1]: Started session-13.scope.
Sep 9 00:37:59.008703 systemd-logind[1303]: New session 13 of user core.
Sep 9 00:37:59.118605 sshd[3564]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:59.120925 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:33164.service.
Sep 9 00:37:59.121643 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:33160.service: Deactivated successfully.
Sep 9 00:37:59.122649 systemd-logind[1303]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:37:59.122706 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:37:59.124091 systemd-logind[1303]: Removed session 13.
Sep 9 00:37:59.162601 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 33164 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:59.163904 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:59.167346 systemd-logind[1303]: New session 14 of user core.
Sep 9 00:37:59.168090 systemd[1]: Started session-14.scope.
Sep 9 00:37:59.341048 sshd[3577]: pam_unix(sshd:session): session closed for user core
Sep 9 00:37:59.343279 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:33178.service.
Sep 9 00:37:59.345055 systemd-logind[1303]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:37:59.345293 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:33164.service: Deactivated successfully.
Sep 9 00:37:59.346082 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:37:59.346556 systemd-logind[1303]: Removed session 14.
Sep 9 00:37:59.381562 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 33178 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:37:59.382925 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:37:59.386158 systemd-logind[1303]: New session 15 of user core.
Sep 9 00:37:59.386932 systemd[1]: Started session-15.scope.
Sep 9 00:38:00.546552 sshd[3589]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:00.549529 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:50394.service.
Sep 9 00:38:00.556335 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:33178.service: Deactivated successfully.
Sep 9 00:38:00.557816 systemd-logind[1303]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:38:00.557858 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:38:00.558923 systemd-logind[1303]: Removed session 15.
Sep 9 00:38:00.597931 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 50394 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:00.599794 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:00.603192 systemd-logind[1303]: New session 16 of user core.
Sep 9 00:38:00.603975 systemd[1]: Started session-16.scope.
Sep 9 00:38:00.816382 sshd[3606]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:00.818853 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:50408.service.
Sep 9 00:38:00.824880 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:50394.service: Deactivated successfully.
Sep 9 00:38:00.825773 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:38:00.826451 systemd-logind[1303]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:38:00.828919 systemd-logind[1303]: Removed session 16.
Sep 9 00:38:00.856538 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 50408 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:00.858056 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:00.861225 systemd-logind[1303]: New session 17 of user core.
Sep 9 00:38:00.862086 systemd[1]: Started session-17.scope.
Sep 9 00:38:00.982920 sshd[3621]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:00.985676 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:50408.service: Deactivated successfully.
Sep 9 00:38:00.986583 systemd-logind[1303]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:38:00.986654 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:38:00.987508 systemd-logind[1303]: Removed session 17.
Sep 9 00:38:05.986745 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:50410.service.
Sep 9 00:38:06.025218 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 50410 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:06.026836 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:06.030747 systemd-logind[1303]: New session 18 of user core.
Sep 9 00:38:06.031190 systemd[1]: Started session-18.scope.
Sep 9 00:38:06.161268 sshd[3641]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:06.163662 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:50410.service: Deactivated successfully.
Sep 9 00:38:06.164808 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:38:06.164818 systemd-logind[1303]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:38:06.166564 systemd-logind[1303]: Removed session 18.
Sep 9 00:38:11.164967 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:58732.service.
Sep 9 00:38:11.215699 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 58732 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:11.217457 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:11.223619 systemd-logind[1303]: New session 19 of user core.
Sep 9 00:38:11.224986 systemd[1]: Started session-19.scope.
Sep 9 00:38:11.355640 sshd[3658]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:11.358279 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:58732.service: Deactivated successfully.
Sep 9 00:38:11.359251 systemd-logind[1303]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:38:11.359305 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:38:11.360036 systemd-logind[1303]: Removed session 19.
Sep 9 00:38:16.358977 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:58744.service.
Sep 9 00:38:16.395530 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 58744 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:16.396716 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:16.402847 systemd-logind[1303]: New session 20 of user core.
Sep 9 00:38:16.404803 systemd[1]: Started session-20.scope.
Sep 9 00:38:16.535060 sshd[3674]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:16.537526 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:58758.service.
Sep 9 00:38:16.541648 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:58744.service: Deactivated successfully.
Sep 9 00:38:16.542700 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:38:16.542713 systemd-logind[1303]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:38:16.543691 systemd-logind[1303]: Removed session 20.
Sep 9 00:38:16.577258 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 58758 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:38:16.578311 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:38:16.582009 systemd-logind[1303]: New session 21 of user core.
Sep 9 00:38:16.582417 systemd[1]: Started session-21.scope.
Sep 9 00:38:19.030053 env[1317]: time="2025-09-09T00:38:19.030010524Z" level=info msg="StopContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" with timeout 30 (s)"
Sep 9 00:38:19.030467 env[1317]: time="2025-09-09T00:38:19.030317721Z" level=info msg="Stop container \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" with signal terminated"
Sep 9 00:38:19.043262 systemd[1]: run-containerd-runc-k8s.io-0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298-runc.mc1VdF.mount: Deactivated successfully.
Sep 9 00:38:19.064278 env[1317]: time="2025-09-09T00:38:19.064211032Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:38:19.069433 env[1317]: time="2025-09-09T00:38:19.069392616Z" level=info msg="StopContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" with timeout 2 (s)"
Sep 9 00:38:19.069684 env[1317]: time="2025-09-09T00:38:19.069654333Z" level=info msg="Stop container \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" with signal terminated"
Sep 9 00:38:19.072302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a-rootfs.mount: Deactivated successfully.
Sep 9 00:38:19.078745 systemd-networkd[1095]: lxc_health: Link DOWN
Sep 9 00:38:19.078753 systemd-networkd[1095]: lxc_health: Lost carrier
Sep 9 00:38:19.081527 env[1317]: time="2025-09-09T00:38:19.081487245Z" level=info msg="shim disconnected" id=4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a
Sep 9 00:38:19.081682 env[1317]: time="2025-09-09T00:38:19.081662763Z" level=warning msg="cleaning up after shim disconnected" id=4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a namespace=k8s.io
Sep 9 00:38:19.081762 env[1317]: time="2025-09-09T00:38:19.081748042Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:19.088645 env[1317]: time="2025-09-09T00:38:19.088609327Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:19.091208 env[1317]: time="2025-09-09T00:38:19.091172499Z" level=info msg="StopContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" returns successfully"
Sep 9 00:38:19.092225 env[1317]: time="2025-09-09T00:38:19.092194088Z" level=info msg="StopPodSandbox for \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\""
Sep 9 00:38:19.092292 env[1317]: time="2025-09-09T00:38:19.092267127Z" level=info msg="Container to stop \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.094335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3-shm.mount: Deactivated successfully.
Sep 9 00:38:19.118314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298-rootfs.mount: Deactivated successfully.
Sep 9 00:38:19.127744 env[1317]: time="2025-09-09T00:38:19.127695142Z" level=info msg="shim disconnected" id=51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3
Sep 9 00:38:19.127744 env[1317]: time="2025-09-09T00:38:19.127733902Z" level=warning msg="cleaning up after shim disconnected" id=51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3 namespace=k8s.io
Sep 9 00:38:19.127744 env[1317]: time="2025-09-09T00:38:19.127743062Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:19.128123 env[1317]: time="2025-09-09T00:38:19.128081298Z" level=info msg="shim disconnected" id=0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298
Sep 9 00:38:19.128282 env[1317]: time="2025-09-09T00:38:19.128255936Z" level=warning msg="cleaning up after shim disconnected" id=0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298 namespace=k8s.io
Sep 9 00:38:19.128375 env[1317]: time="2025-09-09T00:38:19.128360055Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:19.134987 env[1317]: time="2025-09-09T00:38:19.134929624Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3792 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:19.135283 env[1317]: time="2025-09-09T00:38:19.135256260Z" level=info msg="TearDown network for sandbox \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\" successfully"
Sep 9 00:38:19.135317 env[1317]: time="2025-09-09T00:38:19.135281900Z" level=info msg="StopPodSandbox for \"51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3\" returns successfully"
Sep 9 00:38:19.137509 env[1317]: time="2025-09-09T00:38:19.137479516Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3796 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:19.140649 env[1317]: time="2025-09-09T00:38:19.139968169Z" level=info msg="StopContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" returns successfully"
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141064157Z" level=info msg="StopPodSandbox for \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\""
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141145156Z" level=info msg="Container to stop \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141161836Z" level=info msg="Container to stop \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141172596Z" level=info msg="Container to stop \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141184756Z" level=info msg="Container to stop \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.141354 env[1317]: time="2025-09-09T00:38:19.141195515Z" level=info msg="Container to stop \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:38:19.165099 env[1317]: time="2025-09-09T00:38:19.165054616Z" level=info msg="shim disconnected" id=dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b
Sep 9 00:38:19.165284 env[1317]: time="2025-09-09T00:38:19.165104536Z" level=warning msg="cleaning up after shim disconnected" id=dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b namespace=k8s.io
Sep 9 00:38:19.165284 env[1317]: time="2025-09-09T00:38:19.165115535Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:19.173194 env[1317]: time="2025-09-09T00:38:19.173148008Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3838 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:19.173479 env[1317]: time="2025-09-09T00:38:19.173449965Z" level=info msg="TearDown network for sandbox \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" successfully"
Sep 9 00:38:19.173515 env[1317]: time="2025-09-09T00:38:19.173478485Z" level=info msg="StopPodSandbox for \"dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b\" returns successfully"
Sep 9 00:38:19.271668 kubelet[2077]: I0909 00:38:19.271066 2077 scope.go:117] "RemoveContainer" containerID="4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a"
Sep 9 00:38:19.274400 env[1317]: time="2025-09-09T00:38:19.274303508Z" level=info msg="RemoveContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\""
Sep 9 00:38:19.280696 env[1317]: time="2025-09-09T00:38:19.280600040Z" level=info msg="RemoveContainer for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" returns successfully"
Sep 9 00:38:19.281638 kubelet[2077]: I0909 00:38:19.281609 2077 scope.go:117] "RemoveContainer" containerID="4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a"
Sep 9 00:38:19.282030 env[1317]: time="2025-09-09T00:38:19.281941345Z" level=error msg="ContainerStatus for \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\": not found"
Sep 9 00:38:19.282153 kubelet[2077]: E0909 00:38:19.282128 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\": not found" containerID="4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a"
Sep 9 00:38:19.282243 kubelet[2077]: I0909 00:38:19.282162 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a"} err="failed to get container status \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b80f1b1d9658d59c2f6d6459b2958a1835bd6cd99cec0d055e77a6e859ccf9a\": not found"
Sep 9 00:38:19.282283 kubelet[2077]: I0909 00:38:19.282242 2077 scope.go:117] "RemoveContainer" containerID="0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298"
Sep 9 00:38:19.283770 env[1317]: time="2025-09-09T00:38:19.283721766Z" level=info msg="RemoveContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\""
Sep 9 00:38:19.287326 env[1317]: time="2025-09-09T00:38:19.287223168Z" level=info msg="RemoveContainer for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" returns successfully"
Sep 9 00:38:19.287567 kubelet[2077]: I0909 00:38:19.287533 2077 scope.go:117] "RemoveContainer" containerID="eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106"
Sep 9 00:38:19.289339 env[1317]: time="2025-09-09T00:38:19.288778671Z" level=info msg="RemoveContainer for \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\""
Sep 9 00:38:19.295522 env[1317]: time="2025-09-09T00:38:19.295481718Z" level=info msg="RemoveContainer for \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\" returns successfully"
Sep 9 00:38:19.295878 kubelet[2077]: I0909 00:38:19.295780 2077 scope.go:117] "RemoveContainer" containerID="ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691"
Sep 9 00:38:19.296862 env[1317]: time="2025-09-09T00:38:19.296830343Z" level=info msg="RemoveContainer for \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\""
Sep 9 00:38:19.299239 env[1317]: time="2025-09-09T00:38:19.299196478Z" level=info msg="RemoveContainer for \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\" returns successfully"
Sep 9 00:38:19.299506 kubelet[2077]: I0909 00:38:19.299401 2077 scope.go:117] "RemoveContainer" containerID="54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6"
Sep 9 00:38:19.300367 env[1317]: time="2025-09-09T00:38:19.300338625Z" level=info msg="RemoveContainer for \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\""
Sep 9 00:38:19.302832 env[1317]: time="2025-09-09T00:38:19.302788399Z" level=info msg="RemoveContainer for \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\" returns successfully"
Sep 9 00:38:19.303031 kubelet[2077]: I0909 00:38:19.303010 2077 scope.go:117] "RemoveContainer" containerID="21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844"
Sep 9 00:38:19.304219 env[1317]: time="2025-09-09T00:38:19.304189423Z" level=info msg="RemoveContainer for \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\""
Sep 9 00:38:19.306538 env[1317]: time="2025-09-09T00:38:19.306496318Z" level=info
msg="RemoveContainer for \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\" returns successfully" Sep 9 00:38:19.306742 kubelet[2077]: I0909 00:38:19.306717 2077 scope.go:117] "RemoveContainer" containerID="0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298" Sep 9 00:38:19.307005 env[1317]: time="2025-09-09T00:38:19.306927354Z" level=error msg="ContainerStatus for \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\": not found" Sep 9 00:38:19.307142 kubelet[2077]: E0909 00:38:19.307115 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\": not found" containerID="0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298" Sep 9 00:38:19.307189 kubelet[2077]: I0909 00:38:19.307147 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298"} err="failed to get container status \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e57d3aabe1de06b729b8b02ff62536277f083e5de95e9734ec194b087505298\": not found" Sep 9 00:38:19.307189 kubelet[2077]: I0909 00:38:19.307169 2077 scope.go:117] "RemoveContainer" containerID="eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106" Sep 9 00:38:19.307475 env[1317]: time="2025-09-09T00:38:19.307383429Z" level=error msg="ContainerStatus for \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\": not found" Sep 9 00:38:19.307686 kubelet[2077]: E0909 00:38:19.307663 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\": not found" containerID="eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106" Sep 9 00:38:19.307760 kubelet[2077]: I0909 00:38:19.307687 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106"} err="failed to get container status \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\": rpc error: code = NotFound desc = an error occurred when try to find container \"eed975b539c172fc34ecc2465c46ab67d30e99eeb0d9420c0597e1cb4c267106\": not found" Sep 9 00:38:19.307760 kubelet[2077]: I0909 00:38:19.307702 2077 scope.go:117] "RemoveContainer" containerID="ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691" Sep 9 00:38:19.307937 env[1317]: time="2025-09-09T00:38:19.307885983Z" level=error msg="ContainerStatus for \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\": not found" Sep 9 00:38:19.308100 kubelet[2077]: E0909 00:38:19.308079 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\": not found" containerID="ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691" Sep 9 00:38:19.308210 kubelet[2077]: I0909 00:38:19.308187 2077 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691"} err="failed to get container status \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef7335b9f1376b12a230d4dc06ab0f28c695e8c657a163adb423c57308d37691\": not found" Sep 9 00:38:19.308281 kubelet[2077]: I0909 00:38:19.308269 2077 scope.go:117] "RemoveContainer" containerID="54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6" Sep 9 00:38:19.308504 env[1317]: time="2025-09-09T00:38:19.308460737Z" level=error msg="ContainerStatus for \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\": not found" Sep 9 00:38:19.308721 kubelet[2077]: E0909 00:38:19.308702 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\": not found" containerID="54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6" Sep 9 00:38:19.308812 kubelet[2077]: I0909 00:38:19.308791 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6"} err="failed to get container status \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"54d60635f2c297e918cd11a0570b2dd06726a17b9616b32c7cf3749840f252c6\": not found" Sep 9 00:38:19.308878 kubelet[2077]: I0909 00:38:19.308866 2077 scope.go:117] "RemoveContainer" containerID="21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844" Sep 9 00:38:19.309153 env[1317]: 
time="2025-09-09T00:38:19.309109890Z" level=error msg="ContainerStatus for \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\": not found" Sep 9 00:38:19.309307 kubelet[2077]: E0909 00:38:19.309288 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\": not found" containerID="21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844" Sep 9 00:38:19.309356 kubelet[2077]: I0909 00:38:19.309310 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844"} err="failed to get container status \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\": rpc error: code = NotFound desc = an error occurred when try to find container \"21417259e6970173332a143a37a7a0712aa35576819f533b24f018e04a734844\": not found" Sep 9 00:38:19.322831 kubelet[2077]: I0909 00:38:19.322800 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-xtables-lock\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.322991 kubelet[2077]: I0909 00:38:19.322973 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-run\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323126 kubelet[2077]: I0909 00:38:19.323109 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-kernel\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323272 kubelet[2077]: I0909 00:38:19.323256 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-net\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323358 kubelet[2077]: I0909 00:38:19.323343 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/890d621f-4bd4-4cfb-86e8-25283278fd27-clustermesh-secrets\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323462 kubelet[2077]: I0909 00:38:19.323443 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxnp8\" (UniqueName: \"kubernetes.io/projected/ef734df2-8351-4413-8d47-fd4e3df1cd89-kube-api-access-wxnp8\") pod \"ef734df2-8351-4413-8d47-fd4e3df1cd89\" (UID: \"ef734df2-8351-4413-8d47-fd4e3df1cd89\") " Sep 9 00:38:19.323557 kubelet[2077]: I0909 00:38:19.323532 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-cgroup\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323638 kubelet[2077]: I0909 00:38:19.323624 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-bpf-maps\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") 
" Sep 9 00:38:19.323725 kubelet[2077]: I0909 00:38:19.323711 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg7wz\" (UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-kube-api-access-jg7wz\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323795 kubelet[2077]: I0909 00:38:19.323781 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cni-path\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323859 kubelet[2077]: I0909 00:38:19.323848 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-hubble-tls\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.323921 kubelet[2077]: I0909 00:38:19.323909 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-hostproc\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.324015 kubelet[2077]: I0909 00:38:19.323999 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-etc-cni-netd\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.324102 kubelet[2077]: I0909 00:38:19.324088 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef734df2-8351-4413-8d47-fd4e3df1cd89-cilium-config-path\") pod 
\"ef734df2-8351-4413-8d47-fd4e3df1cd89\" (UID: \"ef734df2-8351-4413-8d47-fd4e3df1cd89\") " Sep 9 00:38:19.324247 kubelet[2077]: I0909 00:38:19.324231 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-lib-modules\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.324329 kubelet[2077]: I0909 00:38:19.324315 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-config-path\") pod \"890d621f-4bd4-4cfb-86e8-25283278fd27\" (UID: \"890d621f-4bd4-4cfb-86e8-25283278fd27\") " Sep 9 00:38:19.325053 kubelet[2077]: I0909 00:38:19.325020 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.325127 kubelet[2077]: I0909 00:38:19.325028 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.325127 kubelet[2077]: I0909 00:38:19.325029 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.325229 kubelet[2077]: I0909 00:38:19.325207 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.325380 kubelet[2077]: I0909 00:38:19.325341 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.326154 kubelet[2077]: I0909 00:38:19.326092 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-hostproc" (OuterVolumeSpecName: "hostproc") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.326154 kubelet[2077]: I0909 00:38:19.326123 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.328031 kubelet[2077]: I0909 00:38:19.327661 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.328031 kubelet[2077]: I0909 00:38:19.327737 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:38:19.328031 kubelet[2077]: I0909 00:38:19.327762 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.328238 kubelet[2077]: I0909 00:38:19.328200 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cni-path" (OuterVolumeSpecName: "cni-path") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:19.329184 kubelet[2077]: I0909 00:38:19.329141 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef734df2-8351-4413-8d47-fd4e3df1cd89-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef734df2-8351-4413-8d47-fd4e3df1cd89" (UID: "ef734df2-8351-4413-8d47-fd4e3df1cd89"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:38:19.329689 kubelet[2077]: I0909 00:38:19.329658 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:38:19.329854 kubelet[2077]: I0909 00:38:19.329829 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-kube-api-access-jg7wz" (OuterVolumeSpecName: "kube-api-access-jg7wz") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "kube-api-access-jg7wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:38:19.330671 kubelet[2077]: I0909 00:38:19.330646 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890d621f-4bd4-4cfb-86e8-25283278fd27-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "890d621f-4bd4-4cfb-86e8-25283278fd27" (UID: "890d621f-4bd4-4cfb-86e8-25283278fd27"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:38:19.331218 kubelet[2077]: I0909 00:38:19.331189 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef734df2-8351-4413-8d47-fd4e3df1cd89-kube-api-access-wxnp8" (OuterVolumeSpecName: "kube-api-access-wxnp8") pod "ef734df2-8351-4413-8d47-fd4e3df1cd89" (UID: "ef734df2-8351-4413-8d47-fd4e3df1cd89"). InnerVolumeSpecName "kube-api-access-wxnp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:38:19.425450 kubelet[2077]: I0909 00:38:19.425396 2077 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425450 kubelet[2077]: I0909 00:38:19.425462 2077 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425477 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef734df2-8351-4413-8d47-fd4e3df1cd89-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425486 2077 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-lib-modules\") on node 
\"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425496 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425504 2077 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425512 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425521 2077 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425531 2077 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.425677 kubelet[2077]: I0909 00:38:19.425551 2077 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/890d621f-4bd4-4cfb-86e8-25283278fd27-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 00:38:19.425562 2077 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxnp8\" (UniqueName: \"kubernetes.io/projected/ef734df2-8351-4413-8d47-fd4e3df1cd89-kube-api-access-wxnp8\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 
00:38:19.425572 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 00:38:19.425580 2077 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 00:38:19.425590 2077 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg7wz\" (UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-kube-api-access-jg7wz\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 00:38:19.425598 2077 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/890d621f-4bd4-4cfb-86e8-25283278fd27-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:19.427089 kubelet[2077]: I0909 00:38:19.425606 2077 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/890d621f-4bd4-4cfb-86e8-25283278fd27-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:20.037942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51567a90d9acea94e97d3e257f5ae0ba9f8b4ba38607119e574cfa56f81f93f3-rootfs.mount: Deactivated successfully. Sep 9 00:38:20.038122 systemd[1]: var-lib-kubelet-pods-ef734df2\x2d8351\x2d4413\x2d8d47\x2dfd4e3df1cd89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxnp8.mount: Deactivated successfully. Sep 9 00:38:20.038205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b-rootfs.mount: Deactivated successfully. 
Sep 9 00:38:20.038277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd9683958a2b320879daad5782c48dc5c04f9641eaf3c7748abe9f53a689c71b-shm.mount: Deactivated successfully. Sep 9 00:38:20.038351 systemd[1]: var-lib-kubelet-pods-890d621f\x2d4bd4\x2d4cfb\x2d86e8\x2d25283278fd27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djg7wz.mount: Deactivated successfully. Sep 9 00:38:20.038428 systemd[1]: var-lib-kubelet-pods-890d621f\x2d4bd4\x2d4cfb\x2d86e8\x2d25283278fd27-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:38:20.038506 systemd[1]: var-lib-kubelet-pods-890d621f\x2d4bd4\x2d4cfb\x2d86e8\x2d25283278fd27-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:38:20.090873 kubelet[2077]: I0909 00:38:20.090813 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" path="/var/lib/kubelet/pods/890d621f-4bd4-4cfb-86e8-25283278fd27/volumes" Sep 9 00:38:20.091428 kubelet[2077]: I0909 00:38:20.091394 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef734df2-8351-4413-8d47-fd4e3df1cd89" path="/var/lib/kubelet/pods/ef734df2-8351-4413-8d47-fd4e3df1cd89/volumes" Sep 9 00:38:20.973648 sshd[3686]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:20.975849 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:57218.service. Sep 9 00:38:20.976748 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:58758.service: Deactivated successfully. Sep 9 00:38:20.977719 systemd-logind[1303]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:38:20.977738 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:38:20.979070 systemd-logind[1303]: Removed session 21. 
Sep 9 00:38:21.014503 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:38:21.015734 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:38:21.019480 systemd-logind[1303]: New session 22 of user core. Sep 9 00:38:21.019943 systemd[1]: Started session-22.scope. Sep 9 00:38:21.089365 kubelet[2077]: E0909 00:38:21.089335 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:21.090975 kubelet[2077]: E0909 00:38:21.089361 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:21.137177 kubelet[2077]: E0909 00:38:21.137107 2077 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:38:22.329085 sshd[3855]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:22.332117 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:57226.service. Sep 9 00:38:22.334661 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:57218.service: Deactivated successfully. Sep 9 00:38:22.341303 systemd-logind[1303]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:38:22.341347 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:38:22.352137 systemd-logind[1303]: Removed session 22. 
Sep 9 00:38:22.352658 kubelet[2077]: E0909 00:38:22.352613 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="apply-sysctl-overwrites" Sep 9 00:38:22.352658 kubelet[2077]: E0909 00:38:22.352657 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="clean-cilium-state" Sep 9 00:38:22.352969 kubelet[2077]: E0909 00:38:22.352666 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="mount-cgroup" Sep 9 00:38:22.352969 kubelet[2077]: E0909 00:38:22.352673 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef734df2-8351-4413-8d47-fd4e3df1cd89" containerName="cilium-operator" Sep 9 00:38:22.352969 kubelet[2077]: E0909 00:38:22.352679 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="mount-bpf-fs" Sep 9 00:38:22.352969 kubelet[2077]: E0909 00:38:22.352684 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="cilium-agent" Sep 9 00:38:22.352969 kubelet[2077]: I0909 00:38:22.352722 2077 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef734df2-8351-4413-8d47-fd4e3df1cd89" containerName="cilium-operator" Sep 9 00:38:22.352969 kubelet[2077]: I0909 00:38:22.352729 2077 memory_manager.go:354] "RemoveStaleState removing state" podUID="890d621f-4bd4-4cfb-86e8-25283278fd27" containerName="cilium-agent" Sep 9 00:38:22.394587 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 57226 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:38:22.396568 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:38:22.400683 systemd-logind[1303]: New session 23 of user core. Sep 9 00:38:22.401139 systemd[1]: Started session-23.scope. 
Sep 9 00:38:22.532869 sshd[3869]: pam_unix(sshd:session): session closed for user core Sep 9 00:38:22.535447 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:57232.service. Sep 9 00:38:22.537349 systemd-logind[1303]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:38:22.537471 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:57226.service: Deactivated successfully. Sep 9 00:38:22.538452 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:38:22.539016 systemd-logind[1303]: Removed session 23. Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545371 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-ipsec-secrets\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545434 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-bpf-maps\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545456 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-etc-cni-netd\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545487 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-net\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " 
pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545515 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hubble-tls\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.545737 kubelet[2077]: I0909 00:38:22.545535 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq7n2\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-kube-api-access-bq7n2\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545567 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-clustermesh-secrets\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545586 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-cgroup\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545603 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cni-path\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545619 2077 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-kernel\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545649 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hostproc\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546033 kubelet[2077]: I0909 00:38:22.545666 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-run\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546174 kubelet[2077]: I0909 00:38:22.545681 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-lib-modules\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546174 kubelet[2077]: I0909 00:38:22.545711 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-xtables-lock\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.546174 kubelet[2077]: I0909 00:38:22.545750 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-config-path\") pod \"cilium-rc6ct\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " pod="kube-system/cilium-rc6ct" Sep 9 00:38:22.547541 kubelet[2077]: E0909 00:38:22.547463 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-bq7n2 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-rc6ct" podUID="1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" Sep 9 00:38:22.584585 sshd[3883]: Accepted publickey for core from 10.0.0.1 port 57232 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:38:22.585793 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:38:22.590477 systemd-logind[1303]: New session 24 of user core. Sep 9 00:38:22.591397 systemd[1]: Started session-24.scope. 
Sep 9 00:38:23.089417 kubelet[2077]: E0909 00:38:23.089384 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:23.450905 kubelet[2077]: I0909 00:38:23.450797 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-run\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.450905 kubelet[2077]: I0909 00:38:23.450848 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-config-path\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.450905 kubelet[2077]: I0909 00:38:23.450872 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-etc-cni-netd\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.450905 kubelet[2077]: I0909 00:38:23.450892 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-bpf-maps\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.450905 kubelet[2077]: I0909 00:38:23.450908 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hostproc\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: 
I0909 00:38:23.450923 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-xtables-lock\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: I0909 00:38:23.450939 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-cgroup\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: I0909 00:38:23.450973 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-clustermesh-secrets\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: I0909 00:38:23.450990 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-kernel\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: I0909 00:38:23.451007 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq7n2\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-kube-api-access-bq7n2\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451405 kubelet[2077]: I0909 00:38:23.451028 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-ipsec-secrets\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451044 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hubble-tls\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451060 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-net\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451084 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cni-path\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451101 2077 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-lib-modules\") pod \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\" (UID: \"1e4d07a9-2f0d-4a03-bef6-d424e382c3bd\") " Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451168 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.451563 kubelet[2077]: I0909 00:38:23.451193 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.451853 kubelet[2077]: I0909 00:38:23.451821 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452033 kubelet[2077]: I0909 00:38:23.452000 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452095 kubelet[2077]: I0909 00:38:23.452039 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452095 kubelet[2077]: I0909 00:38:23.452056 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452095 kubelet[2077]: I0909 00:38:23.452071 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452095 kubelet[2077]: I0909 00:38:23.452084 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452191 kubelet[2077]: I0909 00:38:23.452098 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452191 kubelet[2077]: I0909 00:38:23.452111 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:38:23.452988 kubelet[2077]: I0909 00:38:23.452942 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:38:23.457719 kubelet[2077]: I0909 00:38:23.457602 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-kube-api-access-bq7n2" (OuterVolumeSpecName: "kube-api-access-bq7n2") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "kube-api-access-bq7n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:38:23.457719 kubelet[2077]: I0909 00:38:23.457671 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:38:23.458214 systemd[1]: var-lib-kubelet-pods-1e4d07a9\x2d2f0d\x2d4a03\x2dbef6\x2dd424e382c3bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbq7n2.mount: Deactivated successfully. Sep 9 00:38:23.458363 systemd[1]: var-lib-kubelet-pods-1e4d07a9\x2d2f0d\x2d4a03\x2dbef6\x2dd424e382c3bd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:38:23.458594 kubelet[2077]: I0909 00:38:23.458566 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:38:23.458768 kubelet[2077]: I0909 00:38:23.458750 2077 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" (UID: "1e4d07a9-2f0d-4a03-bef6-d424e382c3bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:38:23.461577 systemd[1]: var-lib-kubelet-pods-1e4d07a9\x2d2f0d\x2d4a03\x2dbef6\x2dd424e382c3bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:38:23.461712 systemd[1]: var-lib-kubelet-pods-1e4d07a9\x2d2f0d\x2d4a03\x2dbef6\x2dd424e382c3bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 9 00:38:23.551689 kubelet[2077]: I0909 00:38:23.551645 2077 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551689 kubelet[2077]: I0909 00:38:23.551677 2077 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551689 kubelet[2077]: I0909 00:38:23.551689 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551689 kubelet[2077]: I0909 00:38:23.551698 2077 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551708 2077 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551716 2077 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq7n2\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-kube-api-access-bq7n2\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551725 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551732 2077 
reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551740 2077 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551748 2077 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551755 2077 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.551938 kubelet[2077]: I0909 00:38:23.551763 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.552143 kubelet[2077]: I0909 00:38:23.551779 2077 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.552143 kubelet[2077]: I0909 00:38:23.551788 2077 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:23.552143 kubelet[2077]: I0909 00:38:23.551795 2077 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:38:24.457295 kubelet[2077]: I0909 00:38:24.457252 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-cilium-run\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.457729 kubelet[2077]: I0909 00:38:24.457713 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-bpf-maps\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.457807 kubelet[2077]: I0909 00:38:24.457793 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-lib-modules\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.457876 kubelet[2077]: I0909 00:38:24.457862 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-clustermesh-secrets\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458005 kubelet[2077]: I0909 00:38:24.457990 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-xtables-lock\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458099 kubelet[2077]: 
I0909 00:38:24.458085 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-host-proc-sys-kernel\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458168 kubelet[2077]: I0909 00:38:24.458156 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-cilium-ipsec-secrets\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458241 kubelet[2077]: I0909 00:38:24.458227 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-hostproc\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458322 kubelet[2077]: I0909 00:38:24.458309 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-cilium-config-path\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458401 kubelet[2077]: I0909 00:38:24.458387 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-host-proc-sys-net\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458496 kubelet[2077]: I0909 00:38:24.458467 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-etc-cni-netd\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458571 kubelet[2077]: I0909 00:38:24.458558 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-cni-path\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458638 kubelet[2077]: I0909 00:38:24.458624 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-hubble-tls\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458709 kubelet[2077]: I0909 00:38:24.458696 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvlf\" (UniqueName: \"kubernetes.io/projected/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-kube-api-access-kxvlf\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.458791 kubelet[2077]: I0909 00:38:24.458779 2077 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37094f9c-d26e-48a8-92a6-3bf1d5c543a4-cilium-cgroup\") pod \"cilium-jvvvs\" (UID: \"37094f9c-d26e-48a8-92a6-3bf1d5c543a4\") " pod="kube-system/cilium-jvvvs" Sep 9 00:38:24.637225 kubelet[2077]: E0909 00:38:24.637193 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:38:24.638847 env[1317]: 
time="2025-09-09T00:38:24.638121410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvvvs,Uid:37094f9c-d26e-48a8-92a6-3bf1d5c543a4,Namespace:kube-system,Attempt:0,}"
Sep 9 00:38:24.656870 env[1317]: time="2025-09-09T00:38:24.656799980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:38:24.657017 env[1317]: time="2025-09-09T00:38:24.656842460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:38:24.657017 env[1317]: time="2025-09-09T00:38:24.656853500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:38:24.657298 env[1317]: time="2025-09-09T00:38:24.657265697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d pid=3916 runtime=io.containerd.runc.v2
Sep 9 00:38:24.706883 env[1317]: time="2025-09-09T00:38:24.706821564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvvvs,Uid:37094f9c-d26e-48a8-92a6-3bf1d5c543a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\""
Sep 9 00:38:24.708744 kubelet[2077]: E0909 00:38:24.707579 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:24.709811 env[1317]: time="2025-09-09T00:38:24.709676948Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:38:24.720388 env[1317]: time="2025-09-09T00:38:24.720348565Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c79342da0b4e9cbb7e9b94e63ef5b6a6b66ec33e557647a6ceb5dbec02b93b26\""
Sep 9 00:38:24.720831 env[1317]: time="2025-09-09T00:38:24.720792682Z" level=info msg="StartContainer for \"c79342da0b4e9cbb7e9b94e63ef5b6a6b66ec33e557647a6ceb5dbec02b93b26\""
Sep 9 00:38:24.778175 env[1317]: time="2025-09-09T00:38:24.778108103Z" level=info msg="StartContainer for \"c79342da0b4e9cbb7e9b94e63ef5b6a6b66ec33e557647a6ceb5dbec02b93b26\" returns successfully"
Sep 9 00:38:24.810905 env[1317]: time="2025-09-09T00:38:24.810845190Z" level=info msg="shim disconnected" id=c79342da0b4e9cbb7e9b94e63ef5b6a6b66ec33e557647a6ceb5dbec02b93b26
Sep 9 00:38:24.810905 env[1317]: time="2025-09-09T00:38:24.810893469Z" level=warning msg="cleaning up after shim disconnected" id=c79342da0b4e9cbb7e9b94e63ef5b6a6b66ec33e557647a6ceb5dbec02b93b26 namespace=k8s.io
Sep 9 00:38:24.810905 env[1317]: time="2025-09-09T00:38:24.810903149Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:24.817607 env[1317]: time="2025-09-09T00:38:24.817546750Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4002 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:25.293672 kubelet[2077]: E0909 00:38:25.293620 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:25.296647 env[1317]: time="2025-09-09T00:38:25.296607128Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:38:25.313460 env[1317]: time="2025-09-09T00:38:25.313383564Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34345baea625d859e3ac9fd45eddd6f6ecc79a50aba43d5f432b2371ab4096ab\""
Sep 9 00:38:25.314135 env[1317]: time="2025-09-09T00:38:25.314100800Z" level=info msg="StartContainer for \"34345baea625d859e3ac9fd45eddd6f6ecc79a50aba43d5f432b2371ab4096ab\""
Sep 9 00:38:25.369435 env[1317]: time="2025-09-09T00:38:25.369387843Z" level=info msg="StartContainer for \"34345baea625d859e3ac9fd45eddd6f6ecc79a50aba43d5f432b2371ab4096ab\" returns successfully"
Sep 9 00:38:25.396974 env[1317]: time="2025-09-09T00:38:25.396916025Z" level=info msg="shim disconnected" id=34345baea625d859e3ac9fd45eddd6f6ecc79a50aba43d5f432b2371ab4096ab
Sep 9 00:38:25.397216 env[1317]: time="2025-09-09T00:38:25.397195584Z" level=warning msg="cleaning up after shim disconnected" id=34345baea625d859e3ac9fd45eddd6f6ecc79a50aba43d5f432b2371ab4096ab namespace=k8s.io
Sep 9 00:38:25.397281 env[1317]: time="2025-09-09T00:38:25.397267463Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:25.405636 env[1317]: time="2025-09-09T00:38:25.405592862Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:26.092012 kubelet[2077]: I0909 00:38:26.091970 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4d07a9-2f0d-4a03-bef6-d424e382c3bd" path="/var/lib/kubelet/pods/1e4d07a9-2f0d-4a03-bef6-d424e382c3bd/volumes"
Sep 9 00:38:26.138419 kubelet[2077]: E0909 00:38:26.138383 2077 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:38:26.295878 kubelet[2077]: E0909 00:38:26.295833 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:26.300535 env[1317]: time="2025-09-09T00:38:26.300067274Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:38:26.315874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690253486.mount: Deactivated successfully.
Sep 9 00:38:26.320477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279172168.mount: Deactivated successfully.
Sep 9 00:38:26.323276 env[1317]: time="2025-09-09T00:38:26.323231459Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"354e0427ac65d1bd930e3dc0e37238121816c5be0b6d21bd93eaa59ef32611ae\""
Sep 9 00:38:26.325204 env[1317]: time="2025-09-09T00:38:26.325175091Z" level=info msg="StartContainer for \"354e0427ac65d1bd930e3dc0e37238121816c5be0b6d21bd93eaa59ef32611ae\""
Sep 9 00:38:26.375422 env[1317]: time="2025-09-09T00:38:26.375311923Z" level=info msg="StartContainer for \"354e0427ac65d1bd930e3dc0e37238121816c5be0b6d21bd93eaa59ef32611ae\" returns successfully"
Sep 9 00:38:26.398649 env[1317]: time="2025-09-09T00:38:26.398602867Z" level=info msg="shim disconnected" id=354e0427ac65d1bd930e3dc0e37238121816c5be0b6d21bd93eaa59ef32611ae
Sep 9 00:38:26.398864 env[1317]: time="2025-09-09T00:38:26.398845746Z" level=warning msg="cleaning up after shim disconnected" id=354e0427ac65d1bd930e3dc0e37238121816c5be0b6d21bd93eaa59ef32611ae namespace=k8s.io
Sep 9 00:38:26.398957 env[1317]: time="2025-09-09T00:38:26.398933506Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:26.405366 env[1317]: time="2025-09-09T00:38:26.405327799Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:27.300171 kubelet[2077]: E0909 00:38:27.299819 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:27.313751 env[1317]: time="2025-09-09T00:38:27.313703826Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:38:27.324572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771216726.mount: Deactivated successfully.
Sep 9 00:38:27.329335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645631194.mount: Deactivated successfully.
Sep 9 00:38:27.332507 env[1317]: time="2025-09-09T00:38:27.332456525Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"202936e510205af3bd56d6e37190994d94de078539d0bf2e7cf361ffc66e44ff\""
Sep 9 00:38:27.333290 env[1317]: time="2025-09-09T00:38:27.333259282Z" level=info msg="StartContainer for \"202936e510205af3bd56d6e37190994d94de078539d0bf2e7cf361ffc66e44ff\""
Sep 9 00:38:27.378650 env[1317]: time="2025-09-09T00:38:27.378578933Z" level=info msg="StartContainer for \"202936e510205af3bd56d6e37190994d94de078539d0bf2e7cf361ffc66e44ff\" returns successfully"
Sep 9 00:38:27.396921 env[1317]: time="2025-09-09T00:38:27.396861713Z" level=info msg="shim disconnected" id=202936e510205af3bd56d6e37190994d94de078539d0bf2e7cf361ffc66e44ff
Sep 9 00:38:27.396921 env[1317]: time="2025-09-09T00:38:27.396908033Z" level=warning msg="cleaning up after shim disconnected" id=202936e510205af3bd56d6e37190994d94de078539d0bf2e7cf361ffc66e44ff namespace=k8s.io
Sep 9 00:38:27.396921 env[1317]: time="2025-09-09T00:38:27.396918633Z" level=info msg="cleaning up dead shim"
Sep 9 00:38:27.403625 env[1317]: time="2025-09-09T00:38:27.403588251Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n"
Sep 9 00:38:28.162629 kubelet[2077]: I0909 00:38:28.162160 2077 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:38:28Z","lastTransitionTime":"2025-09-09T00:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 00:38:28.304212 kubelet[2077]: E0909 00:38:28.304169 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:28.306206 env[1317]: time="2025-09-09T00:38:28.306162165Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:38:28.316579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288005578.mount: Deactivated successfully.
Sep 9 00:38:28.317321 env[1317]: time="2025-09-09T00:38:28.317288738Z" level=info msg="CreateContainer within sandbox \"4942c452d2dcbad5f0497336af65d7d491145778223b96f985ac5f5f0a268b9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95821ca89b450d963c0b4b50a7ccd711e13ecb7415311c306a3a37acc22e1fff\""
Sep 9 00:38:28.318105 env[1317]: time="2025-09-09T00:38:28.318077416Z" level=info msg="StartContainer for \"95821ca89b450d963c0b4b50a7ccd711e13ecb7415311c306a3a37acc22e1fff\""
Sep 9 00:38:28.374453 env[1317]: time="2025-09-09T00:38:28.374400597Z" level=info msg="StartContainer for \"95821ca89b450d963c0b4b50a7ccd711e13ecb7415311c306a3a37acc22e1fff\" returns successfully"
Sep 9 00:38:28.625977 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 9 00:38:29.311207 kubelet[2077]: E0909 00:38:29.310868 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:29.325322 kubelet[2077]: I0909 00:38:29.324980 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jvvvs" podStartSLOduration=5.324963546 podStartE2EDuration="5.324963546s" podCreationTimestamp="2025-09-09 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:38:29.324202628 +0000 UTC m=+83.341591277" watchObservedRunningTime="2025-09-09 00:38:29.324963546 +0000 UTC m=+83.342352195"
Sep 9 00:38:30.638851 kubelet[2077]: E0909 00:38:30.638820 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:31.436985 systemd-networkd[1095]: lxc_health: Link UP
Sep 9 00:38:31.448530 systemd-networkd[1095]: lxc_health: Gained carrier
Sep 9 00:38:31.448966 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 9 00:38:32.090966 kubelet[2077]: E0909 00:38:32.090484 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:32.610141 systemd-networkd[1095]: lxc_health: Gained IPv6LL
Sep 9 00:38:32.640384 kubelet[2077]: E0909 00:38:32.639704 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:33.317956 kubelet[2077]: E0909 00:38:33.317917 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:34.319613 kubelet[2077]: E0909 00:38:34.319584 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:38:35.157903 systemd[1]: run-containerd-runc-k8s.io-95821ca89b450d963c0b4b50a7ccd711e13ecb7415311c306a3a37acc22e1fff-runc.0UeYpf.mount: Deactivated successfully.
Sep 9 00:38:35.205599 kubelet[2077]: E0909 00:38:35.205132 2077 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:36092->127.0.0.1:38671: read tcp 127.0.0.1:36092->127.0.0.1:38671: read: connection reset by peer
Sep 9 00:38:37.268396 systemd[1]: run-containerd-runc-k8s.io-95821ca89b450d963c0b4b50a7ccd711e13ecb7415311c306a3a37acc22e1fff-runc.D0nOE4.mount: Deactivated successfully.
Sep 9 00:38:37.337659 sshd[3883]: pam_unix(sshd:session): session closed for user core
Sep 9 00:38:37.339969 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:57232.service: Deactivated successfully.
Sep 9 00:38:37.340887 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:38:37.340907 systemd-logind[1303]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:38:37.341827 systemd-logind[1303]: Removed session 24.