Oct 31 00:52:11.697284 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 31 00:52:11.697305 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Oct 30 23:38:01 -00 2025
Oct 31 00:52:11.697313 kernel: efi: EFI v2.70 by EDK II
Oct 31 00:52:11.697318 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 31 00:52:11.697323 kernel: random: crng init done
Oct 31 00:52:11.697329 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:52:11.697335 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 31 00:52:11.697341 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 00:52:11.697347 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697352 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697357 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697363 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697368 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697373 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697382 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697388 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697394 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 00:52:11.697400 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 31 00:52:11.697407 kernel: NUMA: Failed to initialise from firmware
Oct 31 00:52:11.697413 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:52:11.697419 kernel: NUMA: NODE_DATA [mem 0xdcb08900-0xdcb0dfff]
Oct 31 00:52:11.697425 kernel: Zone ranges:
Oct 31 00:52:11.697431 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:52:11.697439 kernel: DMA32 empty
Oct 31 00:52:11.697444 kernel: Normal empty
Oct 31 00:52:11.697450 kernel: Movable zone start for each node
Oct 31 00:52:11.697456 kernel: Early memory node ranges
Oct 31 00:52:11.697461 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 31 00:52:11.697467 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 31 00:52:11.697473 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 31 00:52:11.697479 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 31 00:52:11.697485 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 31 00:52:11.697490 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 31 00:52:11.697496 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 31 00:52:11.697502 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 31 00:52:11.697509 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 31 00:52:11.697514 kernel: psci: probing for conduit method from ACPI.
Oct 31 00:52:11.697520 kernel: psci: PSCIv1.1 detected in firmware.
Oct 31 00:52:11.697525 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 31 00:52:11.697531 kernel: psci: Trusted OS migration not required
Oct 31 00:52:11.697540 kernel: psci: SMC Calling Convention v1.1
Oct 31 00:52:11.697546 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 31 00:52:11.697553 kernel: ACPI: SRAT not present
Oct 31 00:52:11.697559 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Oct 31 00:52:11.697565 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Oct 31 00:52:11.697572 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 31 00:52:11.697578 kernel: Detected PIPT I-cache on CPU0
Oct 31 00:52:11.697584 kernel: CPU features: detected: GIC system register CPU interface
Oct 31 00:52:11.697590 kernel: CPU features: detected: Hardware dirty bit management
Oct 31 00:52:11.697596 kernel: CPU features: detected: Spectre-v4
Oct 31 00:52:11.697602 kernel: CPU features: detected: Spectre-BHB
Oct 31 00:52:11.697609 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 31 00:52:11.697615 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 31 00:52:11.697621 kernel: CPU features: detected: ARM erratum 1418040
Oct 31 00:52:11.697627 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 31 00:52:11.697633 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 31 00:52:11.697639 kernel: Policy zone: DMA
Oct 31 00:52:11.697646 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c54831d8f121b00ec4768e5b1793fd4b2eb83931891a70a1aede21bf2f1a9635
Oct 31 00:52:11.697653 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 31 00:52:11.697659 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 00:52:11.697665 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 00:52:11.697671 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 00:52:11.697678 kernel: Memory: 2457328K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114960K reserved, 0K cma-reserved)
Oct 31 00:52:11.697685 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 00:52:11.697691 kernel: trace event string verifier disabled
Oct 31 00:52:11.697696 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 00:52:11.697703 kernel: rcu: RCU event tracing is enabled.
Oct 31 00:52:11.697709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 00:52:11.697715 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 00:52:11.697721 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 00:52:11.697727 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 00:52:11.697734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 00:52:11.697740 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 31 00:52:11.697747 kernel: GICv3: 256 SPIs implemented
Oct 31 00:52:11.697753 kernel: GICv3: 0 Extended SPIs implemented
Oct 31 00:52:11.697759 kernel: GICv3: Distributor has no Range Selector support
Oct 31 00:52:11.697765 kernel: Root IRQ handler: gic_handle_irq
Oct 31 00:52:11.697771 kernel: GICv3: 16 PPIs implemented
Oct 31 00:52:11.697777 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 31 00:52:11.697785 kernel: ACPI: SRAT not present
Oct 31 00:52:11.697791 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 31 00:52:11.697797 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 31 00:52:11.697803 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 31 00:52:11.697809 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 31 00:52:11.697815 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 31 00:52:11.697823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:52:11.697829 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 31 00:52:11.697835 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 31 00:52:11.697841 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 31 00:52:11.697847 kernel: arm-pv: using stolen time PV
Oct 31 00:52:11.697853 kernel: Console: colour dummy device 80x25
Oct 31 00:52:11.697860 kernel: ACPI: Core revision 20210730
Oct 31 00:52:11.697866 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 31 00:52:11.697872 kernel: pid_max: default: 32768 minimum: 301
Oct 31 00:52:11.697879 kernel: LSM: Security Framework initializing
Oct 31 00:52:11.697886 kernel: SELinux: Initializing.
Oct 31 00:52:11.697893 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:52:11.697899 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 00:52:11.697906 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 00:52:11.697912 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 31 00:52:11.697918 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 31 00:52:11.697924 kernel: Remapping and enabling EFI services.
Oct 31 00:52:11.697931 kernel: smp: Bringing up secondary CPUs ...
Oct 31 00:52:11.697937 kernel: Detected PIPT I-cache on CPU1
Oct 31 00:52:11.697945 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 31 00:52:11.697951 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 31 00:52:11.697957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:52:11.697963 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 31 00:52:11.697970 kernel: Detected PIPT I-cache on CPU2
Oct 31 00:52:11.697976 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 31 00:52:11.697982 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 31 00:52:11.697989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:52:11.697995 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 31 00:52:11.698001 kernel: Detected PIPT I-cache on CPU3
Oct 31 00:52:11.698008 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 31 00:52:11.698031 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 31 00:52:11.698038 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 31 00:52:11.698044 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 31 00:52:11.698054 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 00:52:11.698062 kernel: SMP: Total of 4 processors activated.
Oct 31 00:52:11.698069 kernel: CPU features: detected: 32-bit EL0 Support
Oct 31 00:52:11.698075 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 31 00:52:11.698082 kernel: CPU features: detected: Common not Private translations
Oct 31 00:52:11.698088 kernel: CPU features: detected: CRC32 instructions
Oct 31 00:52:11.698094 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 31 00:52:11.698101 kernel: CPU features: detected: LSE atomic instructions
Oct 31 00:52:11.698109 kernel: CPU features: detected: Privileged Access Never
Oct 31 00:52:11.698115 kernel: CPU features: detected: RAS Extension Support
Oct 31 00:52:11.698122 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 31 00:52:11.698129 kernel: CPU: All CPU(s) started at EL1
Oct 31 00:52:11.698136 kernel: alternatives: patching kernel code
Oct 31 00:52:11.698144 kernel: devtmpfs: initialized
Oct 31 00:52:11.698150 kernel: KASLR enabled
Oct 31 00:52:11.698157 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 00:52:11.698170 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 00:52:11.698177 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 00:52:11.698184 kernel: SMBIOS 3.0.0 present.
Oct 31 00:52:11.698191 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 31 00:52:11.698197 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 00:52:11.698204 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 31 00:52:11.698213 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 31 00:52:11.698219 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 31 00:52:11.698226 kernel: audit: initializing netlink subsys (disabled)
Oct 31 00:52:11.698233 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1
Oct 31 00:52:11.698239 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 00:52:11.698246 kernel: cpuidle: using governor menu
Oct 31 00:52:11.698252 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 31 00:52:11.698258 kernel: ASID allocator initialised with 32768 entries
Oct 31 00:52:11.698265 kernel: ACPI: bus type PCI registered
Oct 31 00:52:11.698273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 00:52:11.698279 kernel: Serial: AMBA PL011 UART driver
Oct 31 00:52:11.698286 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 00:52:11.698293 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 31 00:52:11.698299 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 00:52:11.698306 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 31 00:52:11.698313 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:52:11.698319 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 31 00:52:11.698326 kernel: ACPI: Added _OSI(Module Device)
Oct 31 00:52:11.698333 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 00:52:11.698340 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 00:52:11.698346 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 31 00:52:11.698353 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 31 00:52:11.698359 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 31 00:52:11.698366 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 00:52:11.698372 kernel: ACPI: Interpreter enabled
Oct 31 00:52:11.698379 kernel: ACPI: Using GIC for interrupt routing
Oct 31 00:52:11.698385 kernel: ACPI: MCFG table detected, 1 entries
Oct 31 00:52:11.698393 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 31 00:52:11.698400 kernel: printk: console [ttyAMA0] enabled
Oct 31 00:52:11.698406 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 00:52:11.698532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 00:52:11.698599 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 31 00:52:11.698658 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 31 00:52:11.698719 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 31 00:52:11.698791 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 31 00:52:11.698800 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 31 00:52:11.698807 kernel: PCI host bridge to bus 0000:00
Oct 31 00:52:11.698878 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 31 00:52:11.698935 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 31 00:52:11.698990 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 31 00:52:11.699071 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 00:52:11.699153 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 31 00:52:11.699237 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 31 00:52:11.699299 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 31 00:52:11.699360 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 31 00:52:11.699420 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 31 00:52:11.699481 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 31 00:52:11.699543 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 31 00:52:11.699606 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 31 00:52:11.699660 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 31 00:52:11.699713 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 31 00:52:11.699768 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 31 00:52:11.699777 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 31 00:52:11.699783 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 31 00:52:11.699790 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 31 00:52:11.699799 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 31 00:52:11.699805 kernel: iommu: Default domain type: Translated
Oct 31 00:52:11.699813 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 31 00:52:11.699820 kernel: vgaarb: loaded
Oct 31 00:52:11.699826 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 31 00:52:11.699833 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 31 00:52:11.699839 kernel: PTP clock support registered
Oct 31 00:52:11.699846 kernel: Registered efivars operations
Oct 31 00:52:11.699852 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 31 00:52:11.699859 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 00:52:11.699868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 00:52:11.699874 kernel: pnp: PnP ACPI init
Oct 31 00:52:11.699955 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 31 00:52:11.699966 kernel: pnp: PnP ACPI: found 1 devices
Oct 31 00:52:11.699973 kernel: NET: Registered PF_INET protocol family
Oct 31 00:52:11.699980 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 00:52:11.699987 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 00:52:11.699993 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 00:52:11.700002 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 00:52:11.700009 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 31 00:52:11.700022 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 00:52:11.700029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:52:11.700036 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 00:52:11.700043 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 00:52:11.700049 kernel: PCI: CLS 0 bytes, default 64
Oct 31 00:52:11.700056 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 31 00:52:11.700063 kernel: kvm [1]: HYP mode not available
Oct 31 00:52:11.700071 kernel: Initialise system trusted keyrings
Oct 31 00:52:11.700078 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 00:52:11.700085 kernel: Key type asymmetric registered
Oct 31 00:52:11.700092 kernel: Asymmetric key parser 'x509' registered
Oct 31 00:52:11.700099 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 31 00:52:11.700106 kernel: io scheduler mq-deadline registered
Oct 31 00:52:11.700113 kernel: io scheduler kyber registered
Oct 31 00:52:11.700119 kernel: io scheduler bfq registered
Oct 31 00:52:11.700126 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 31 00:52:11.700134 kernel: ACPI: button: Power Button [PWRB]
Oct 31 00:52:11.700142 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 31 00:52:11.700216 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 31 00:52:11.700226 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 00:52:11.700233 kernel: thunder_xcv, ver 1.0
Oct 31 00:52:11.700240 kernel: thunder_bgx, ver 1.0
Oct 31 00:52:11.700247 kernel: nicpf, ver 1.0
Oct 31 00:52:11.700253 kernel: nicvf, ver 1.0
Oct 31 00:52:11.700327 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 31 00:52:11.700387 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-31T00:52:11 UTC (1761871931)
Oct 31 00:52:11.700396 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 31 00:52:11.700403 kernel: NET: Registered PF_INET6 protocol family
Oct 31 00:52:11.700410 kernel: Segment Routing with IPv6
Oct 31 00:52:11.700417 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 00:52:11.700424 kernel: NET: Registered PF_PACKET protocol family
Oct 31 00:52:11.700431 kernel: Key type dns_resolver registered
Oct 31 00:52:11.700437 kernel: registered taskstats version 1
Oct 31 00:52:11.700445 kernel: Loading compiled-in X.509 certificates
Oct 31 00:52:11.700452 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: e62237f95ba4ddc0e942e4538fe1019cd3c2f62a'
Oct 31 00:52:11.700459 kernel: Key type .fscrypt registered
Oct 31 00:52:11.700465 kernel: Key type fscrypt-provisioning registered
Oct 31 00:52:11.700472 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 00:52:11.700479 kernel: ima: Allocated hash algorithm: sha1
Oct 31 00:52:11.700485 kernel: ima: No architecture policies found
Oct 31 00:52:11.700493 kernel: clk: Disabling unused clocks
Oct 31 00:52:11.700499 kernel: Freeing unused kernel memory: 36416K
Oct 31 00:52:11.700507 kernel: Run /init as init process
Oct 31 00:52:11.700536 kernel: with arguments:
Oct 31 00:52:11.700543 kernel: /init
Oct 31 00:52:11.700550 kernel: with environment:
Oct 31 00:52:11.701756 kernel: HOME=/
Oct 31 00:52:11.701771 kernel: TERM=linux
Oct 31 00:52:11.701778 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 31 00:52:11.701788 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 00:52:11.701803 systemd[1]: Detected virtualization kvm.
Oct 31 00:52:11.701811 systemd[1]: Detected architecture arm64.
Oct 31 00:52:11.701818 systemd[1]: Running in initrd.
Oct 31 00:52:11.701825 systemd[1]: No hostname configured, using default hostname.
Oct 31 00:52:11.701832 systemd[1]: Hostname set to .
Oct 31 00:52:11.701839 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:52:11.701846 systemd[1]: Queued start job for default target initrd.target.
Oct 31 00:52:11.701854 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 00:52:11.701863 systemd[1]: Reached target cryptsetup.target.
Oct 31 00:52:11.701870 systemd[1]: Reached target paths.target.
Oct 31 00:52:11.701877 systemd[1]: Reached target slices.target.
Oct 31 00:52:11.701884 systemd[1]: Reached target swap.target.
Oct 31 00:52:11.701891 systemd[1]: Reached target timers.target.
Oct 31 00:52:11.701898 systemd[1]: Listening on iscsid.socket.
Oct 31 00:52:11.701905 systemd[1]: Listening on iscsiuio.socket.
Oct 31 00:52:11.701915 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 00:52:11.701922 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 00:52:11.701929 systemd[1]: Listening on systemd-journald.socket.
Oct 31 00:52:11.701937 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 00:52:11.701945 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 00:52:11.701952 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 00:52:11.701959 systemd[1]: Reached target sockets.target.
Oct 31 00:52:11.701967 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 00:52:11.701974 systemd[1]: Finished network-cleanup.service.
Oct 31 00:52:11.701982 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 00:52:11.701989 systemd[1]: Starting systemd-journald.service...
Oct 31 00:52:11.701996 systemd[1]: Starting systemd-modules-load.service...
Oct 31 00:52:11.702003 systemd[1]: Starting systemd-resolved.service...
Oct 31 00:52:11.702011 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 31 00:52:11.702054 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 00:52:11.702061 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 00:52:11.702071 kernel: audit: type=1130 audit(1761871931.697:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.702078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 00:52:11.702092 systemd-journald[290]: Journal started
Oct 31 00:52:11.702150 systemd-journald[290]: Runtime Journal (/run/log/journal/2bc353da700342f3909b3d486dd154a7) is 6.0M, max 48.7M, 42.6M free.
Oct 31 00:52:11.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.701379 systemd-modules-load[291]: Inserted module 'overlay'
Oct 31 00:52:11.705081 systemd[1]: Started systemd-journald.service.
Oct 31 00:52:11.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.709850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 00:52:11.711630 kernel: audit: type=1130 audit(1761871931.705:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.711089 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 31 00:52:11.718095 kernel: audit: type=1130 audit(1761871931.710:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.718144 kernel: audit: type=1130 audit(1761871931.712:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.713261 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 31 00:52:11.726029 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 00:52:11.727610 systemd-resolved[292]: Positive Trust Anchors:
Oct 31 00:52:11.727628 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:52:11.727656 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 31 00:52:11.732088 systemd-resolved[292]: Defaulting to hostname 'linux'.
Oct 31 00:52:11.740277 kernel: Bridge firewalling registered
Oct 31 00:52:11.740300 kernel: audit: type=1130 audit(1761871931.737:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.733199 systemd[1]: Started systemd-resolved.service.
Oct 31 00:52:11.744005 kernel: audit: type=1130 audit(1761871931.740:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.736357 systemd-modules-load[291]: Inserted module 'br_netfilter'
Oct 31 00:52:11.737429 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 31 00:52:11.741128 systemd[1]: Reached target nss-lookup.target.
Oct 31 00:52:11.745558 systemd[1]: Starting dracut-cmdline.service...
Oct 31 00:52:11.750038 kernel: SCSI subsystem initialized
Oct 31 00:52:11.755211 dracut-cmdline[307]: dracut-dracut-053
Oct 31 00:52:11.757745 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c54831d8f121b00ec4768e5b1793fd4b2eb83931891a70a1aede21bf2f1a9635
Oct 31 00:52:11.764199 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:52:11.764225 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:52:11.764235 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 31 00:52:11.764420 systemd-modules-load[291]: Inserted module 'dm_multipath'
Oct 31 00:52:11.765224 systemd[1]: Finished systemd-modules-load.service.
Oct 31 00:52:11.766865 systemd[1]: Starting systemd-sysctl.service...
Oct 31 00:52:11.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.771037 kernel: audit: type=1130 audit(1761871931.766:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.777041 systemd[1]: Finished systemd-sysctl.service.
Oct 31 00:52:11.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.781034 kernel: audit: type=1130 audit(1761871931.777:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.824038 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:52:11.837052 kernel: iscsi: registered transport (tcp)
Oct 31 00:52:11.852040 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:52:11.852067 kernel: QLogic iSCSI HBA Driver
Oct 31 00:52:11.887874 systemd[1]: Finished dracut-cmdline.service.
Oct 31 00:52:11.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.889686 systemd[1]: Starting dracut-pre-udev.service...
Oct 31 00:52:11.893079 kernel: audit: type=1130 audit(1761871931.888:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:11.933048 kernel: raid6: neonx8 gen() 13734 MB/s
Oct 31 00:52:11.950043 kernel: raid6: neonx8 xor() 10774 MB/s
Oct 31 00:52:11.967041 kernel: raid6: neonx4 gen() 13457 MB/s
Oct 31 00:52:11.984038 kernel: raid6: neonx4 xor() 11118 MB/s
Oct 31 00:52:12.001036 kernel: raid6: neonx2 gen() 12934 MB/s
Oct 31 00:52:12.018035 kernel: raid6: neonx2 xor() 10277 MB/s
Oct 31 00:52:12.035036 kernel: raid6: neonx1 gen() 10577 MB/s
Oct 31 00:52:12.052041 kernel: raid6: neonx1 xor() 8780 MB/s
Oct 31 00:52:12.069041 kernel: raid6: int64x8 gen() 6259 MB/s
Oct 31 00:52:12.086039 kernel: raid6: int64x8 xor() 3532 MB/s
Oct 31 00:52:12.103037 kernel: raid6: int64x4 gen() 7226 MB/s
Oct 31 00:52:12.120036 kernel: raid6: int64x4 xor() 3848 MB/s
Oct 31 00:52:12.137037 kernel: raid6: int64x2 gen() 6146 MB/s
Oct 31 00:52:12.154042 kernel: raid6: int64x2 xor() 3316 MB/s
Oct 31 00:52:12.171039 kernel: raid6: int64x1 gen() 5040 MB/s
Oct 31 00:52:12.188210 kernel: raid6: int64x1 xor() 2642 MB/s
Oct 31 00:52:12.188222 kernel: raid6: using algorithm neonx8 gen() 13734 MB/s
Oct 31 00:52:12.188231 kernel: raid6: .... xor() 10774 MB/s, rmw enabled
Oct 31 00:52:12.189312 kernel: raid6: using neon recovery algorithm
Oct 31 00:52:12.200200 kernel: xor: measuring software checksum speed
Oct 31 00:52:12.200235 kernel: 8regs : 17202 MB/sec
Oct 31 00:52:12.201464 kernel: 32regs : 20665 MB/sec
Oct 31 00:52:12.201477 kernel: arm64_neon : 27304 MB/sec
Oct 31 00:52:12.201485 kernel: xor: using function: arm64_neon (27304 MB/sec)
Oct 31 00:52:12.256048 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 31 00:52:12.267776 systemd[1]: Finished dracut-pre-udev.service.
Oct 31 00:52:12.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:12.269000 audit: BPF prog-id=7 op=LOAD
Oct 31 00:52:12.269000 audit: BPF prog-id=8 op=LOAD
Oct 31 00:52:12.269776 systemd[1]: Starting systemd-udevd.service...
Oct 31 00:52:12.282728 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Oct 31 00:52:12.286073 systemd[1]: Started systemd-udevd.service.
Oct 31 00:52:12.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:12.290576 systemd[1]: Starting dracut-pre-trigger.service...
Oct 31 00:52:12.301941 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Oct 31 00:52:12.331517 systemd[1]: Finished dracut-pre-trigger.service.
Oct 31 00:52:12.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:12.333246 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 00:52:12.370955 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 00:52:12.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:12.405250 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 31 00:52:12.414458 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 00:52:12.414482 kernel: GPT:9289727 != 19775487
Oct 31 00:52:12.414491 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 00:52:12.414500 kernel: GPT:9289727 != 19775487
Oct 31 00:52:12.414508 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 31 00:52:12.414515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:52:12.428041 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (553) Oct 31 00:52:12.431733 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 00:52:12.435220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 31 00:52:12.440039 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 00:52:12.441073 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 00:52:12.445639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 00:52:12.447470 systemd[1]: Starting disk-uuid.service... Oct 31 00:52:12.453660 disk-uuid[560]: Primary Header is updated. Oct 31 00:52:12.453660 disk-uuid[560]: Secondary Entries is updated. Oct 31 00:52:12.453660 disk-uuid[560]: Secondary Header is updated. Oct 31 00:52:12.456952 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:52:13.463644 disk-uuid[561]: The operation has completed successfully. Oct 31 00:52:13.464790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:52:13.488497 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 00:52:13.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.488592 systemd[1]: Finished disk-uuid.service. Oct 31 00:52:13.490230 systemd[1]: Starting verity-setup.service... Oct 31 00:52:13.505050 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 31 00:52:13.525732 systemd[1]: Found device dev-mapper-usr.device. 
Oct 31 00:52:13.528061 systemd[1]: Mounting sysusr-usr.mount... Oct 31 00:52:13.529838 systemd[1]: Finished verity-setup.service. Oct 31 00:52:13.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.574888 systemd[1]: Mounted sysusr-usr.mount. Oct 31 00:52:13.576324 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 00:52:13.575821 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 31 00:52:13.576620 systemd[1]: Starting ignition-setup.service... Oct 31 00:52:13.578928 systemd[1]: Starting parse-ip-for-networkd.service... Oct 31 00:52:13.585138 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 00:52:13.585183 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:52:13.585193 kernel: BTRFS info (device vda6): has skinny extents Oct 31 00:52:13.594185 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 00:52:13.601484 systemd[1]: Finished ignition-setup.service. Oct 31 00:52:13.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.603223 systemd[1]: Starting ignition-fetch-offline.service... 
Oct 31 00:52:13.650513 ignition[646]: Ignition 2.14.0 Oct 31 00:52:13.650523 ignition[646]: Stage: fetch-offline Oct 31 00:52:13.650561 ignition[646]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:52:13.650571 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:52:13.650701 ignition[646]: parsed url from cmdline: "" Oct 31 00:52:13.650704 ignition[646]: no config URL provided Oct 31 00:52:13.650708 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 00:52:13.650715 ignition[646]: no config at "/usr/lib/ignition/user.ign" Oct 31 00:52:13.650735 ignition[646]: op(1): [started] loading QEMU firmware config module Oct 31 00:52:13.650739 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 00:52:13.658715 ignition[646]: op(1): [finished] loading QEMU firmware config module Oct 31 00:52:13.675515 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 00:52:13.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.677000 audit: BPF prog-id=9 op=LOAD Oct 31 00:52:13.677826 systemd[1]: Starting systemd-networkd.service... Oct 31 00:52:13.696866 systemd-networkd[738]: lo: Link UP Oct 31 00:52:13.696880 systemd-networkd[738]: lo: Gained carrier Oct 31 00:52:13.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.697549 systemd-networkd[738]: Enumeration completed Oct 31 00:52:13.697652 systemd[1]: Started systemd-networkd.service. Oct 31 00:52:13.697929 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:52:13.698793 systemd[1]: Reached target network.target. 
Oct 31 00:52:13.699480 systemd-networkd[738]: eth0: Link UP Oct 31 00:52:13.699484 systemd-networkd[738]: eth0: Gained carrier Oct 31 00:52:13.700989 systemd[1]: Starting iscsiuio.service... Oct 31 00:52:13.708160 systemd[1]: Started iscsiuio.service. Oct 31 00:52:13.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.709758 systemd[1]: Starting iscsid.service... Oct 31 00:52:13.713416 iscsid[743]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 31 00:52:13.713416 iscsid[743]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 31 00:52:13.713416 iscsid[743]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 31 00:52:13.713416 iscsid[743]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 31 00:52:13.713416 iscsid[743]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 31 00:52:13.713416 iscsid[743]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 31 00:52:13.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.713614 ignition[646]: parsing config with SHA512: 4717af928aeaa8cb439047c76e8b950fbd6f906d584ce99473657e9012c93191fa5be8bde26f20789aea94bc3209954e975637a2777605dd5157723b36611abf Oct 31 00:52:13.716526 systemd[1]: Started iscsid.service. 
Oct 31 00:52:13.722231 systemd[1]: Starting dracut-initqueue.service... Oct 31 00:52:13.728345 ignition[646]: fetch-offline: fetch-offline passed Oct 31 00:52:13.724254 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:52:13.728399 ignition[646]: Ignition finished successfully Oct 31 00:52:13.727811 unknown[646]: fetched base config from "system" Oct 31 00:52:13.727818 unknown[646]: fetched user config from "qemu" Oct 31 00:52:13.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.732921 systemd[1]: Finished dracut-initqueue.service. Oct 31 00:52:13.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.734109 systemd[1]: Finished ignition-fetch-offline.service. Oct 31 00:52:13.735587 systemd[1]: Reached target remote-fs-pre.target. Oct 31 00:52:13.736859 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 00:52:13.738379 systemd[1]: Reached target remote-fs.target. Oct 31 00:52:13.740558 systemd[1]: Starting dracut-pre-mount.service... Oct 31 00:52:13.741734 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 00:52:13.742581 systemd[1]: Starting ignition-kargs.service... Oct 31 00:52:13.748621 systemd[1]: Finished dracut-pre-mount.service. Oct 31 00:52:13.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:52:13.752077 ignition[754]: Ignition 2.14.0 Oct 31 00:52:13.752087 ignition[754]: Stage: kargs Oct 31 00:52:13.752196 ignition[754]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:52:13.752206 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:52:13.753355 ignition[754]: kargs: kargs passed Oct 31 00:52:13.755526 systemd[1]: Finished ignition-kargs.service. Oct 31 00:52:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.753401 ignition[754]: Ignition finished successfully Oct 31 00:52:13.757616 systemd[1]: Starting ignition-disks.service... Oct 31 00:52:13.764326 ignition[764]: Ignition 2.14.0 Oct 31 00:52:13.764336 ignition[764]: Stage: disks Oct 31 00:52:13.764440 ignition[764]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:52:13.766361 systemd[1]: Finished ignition-disks.service. Oct 31 00:52:13.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.764450 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:52:13.767927 systemd[1]: Reached target initrd-root-device.target. Oct 31 00:52:13.765360 ignition[764]: disks: disks passed Oct 31 00:52:13.769310 systemd[1]: Reached target local-fs-pre.target. Oct 31 00:52:13.765407 ignition[764]: Ignition finished successfully Oct 31 00:52:13.770972 systemd[1]: Reached target local-fs.target. Oct 31 00:52:13.772395 systemd[1]: Reached target sysinit.target. Oct 31 00:52:13.773547 systemd[1]: Reached target basic.target. Oct 31 00:52:13.775725 systemd[1]: Starting systemd-fsck-root.service... 
Oct 31 00:52:13.787192 systemd-fsck[772]: ROOT: clean, 637/553520 files, 56031/553472 blocks Oct 31 00:52:13.790893 systemd[1]: Finished systemd-fsck-root.service. Oct 31 00:52:13.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.792851 systemd[1]: Mounting sysroot.mount... Oct 31 00:52:13.798884 systemd[1]: Mounted sysroot.mount. Oct 31 00:52:13.800174 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 31 00:52:13.799698 systemd[1]: Reached target initrd-root-fs.target. Oct 31 00:52:13.802003 systemd[1]: Mounting sysroot-usr.mount... Oct 31 00:52:13.803492 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 31 00:52:13.803542 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 00:52:13.803566 systemd[1]: Reached target ignition-diskful.target. Oct 31 00:52:13.806568 systemd[1]: Mounted sysroot-usr.mount. Oct 31 00:52:13.808818 systemd[1]: Starting initrd-setup-root.service... Oct 31 00:52:13.813480 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 00:52:13.817467 initrd-setup-root[790]: cut: /sysroot/etc/group: No such file or directory Oct 31 00:52:13.821967 initrd-setup-root[798]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 00:52:13.825121 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 00:52:13.851353 systemd[1]: Finished initrd-setup-root.service. Oct 31 00:52:13.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 00:52:13.853058 systemd[1]: Starting ignition-mount.service... Oct 31 00:52:13.854494 systemd[1]: Starting sysroot-boot.service... Oct 31 00:52:13.859070 bash[823]: umount: /sysroot/usr/share/oem: not mounted. Oct 31 00:52:13.868411 ignition[825]: INFO : Ignition 2.14.0 Oct 31 00:52:13.868411 ignition[825]: INFO : Stage: mount Oct 31 00:52:13.869966 ignition[825]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:52:13.869966 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:52:13.869966 ignition[825]: INFO : mount: mount passed Oct 31 00:52:13.869966 ignition[825]: INFO : Ignition finished successfully Oct 31 00:52:13.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:13.871405 systemd[1]: Finished ignition-mount.service. Oct 31 00:52:13.873410 systemd[1]: Finished sysroot-boot.service. Oct 31 00:52:14.538603 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 31 00:52:14.545620 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (833) Oct 31 00:52:14.545656 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 31 00:52:14.545666 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:52:14.547020 kernel: BTRFS info (device vda6): has skinny extents Oct 31 00:52:14.549678 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 31 00:52:14.551250 systemd[1]: Starting ignition-files.service... 
Oct 31 00:52:14.564909 ignition[853]: INFO : Ignition 2.14.0 Oct 31 00:52:14.564909 ignition[853]: INFO : Stage: files Oct 31 00:52:14.566631 ignition[853]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:52:14.566631 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:52:14.566631 ignition[853]: DEBUG : files: compiled without relabeling support, skipping Oct 31 00:52:14.570896 ignition[853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 00:52:14.570896 ignition[853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 00:52:14.574772 ignition[853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 00:52:14.576230 ignition[853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 00:52:14.577792 unknown[853]: wrote ssh authorized keys file for user: core Oct 31 00:52:14.579045 ignition[853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 00:52:14.579045 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:52:14.579045 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 00:52:14.579045 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 00:52:14.579045 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 31 00:52:14.647303 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 31 00:52:14.867372 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 31 
00:52:14.869428 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 31 00:52:14.869428 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 31 00:52:15.102250 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 31 00:52:15.209695 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 31 00:52:15.209695 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 
00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:52:15.213931 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 31 00:52:15.479719 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Oct 31 00:52:15.573597 systemd-networkd[738]: eth0: Gained IPv6LL Oct 31 00:52:15.781323 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 31 00:52:15.781323 ignition[853]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Oct 31 00:52:15.792146 ignition[853]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:52:15.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 00:52:15.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.815692 systemd[1]: Finished ignition-files.service. Oct 31 00:52:15.834727 ignition[853]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 31 00:52:15.834727 ignition[853]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Oct 31 00:52:15.834727 ignition[853]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:52:15.834727 ignition[853]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 00:52:15.834727 ignition[853]: INFO : files: files passed Oct 31 00:52:15.834727 ignition[853]: INFO : Ignition finished successfully Oct 31 00:52:15.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.818145 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 31 00:52:15.819468 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Oct 31 00:52:15.847566 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 31 00:52:15.820121 systemd[1]: Starting ignition-quench.service... Oct 31 00:52:15.850335 initrd-setup-root-after-ignition[880]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 00:52:15.825035 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 31 00:52:15.826277 systemd[1]: Reached target ignition-complete.target. Oct 31 00:52:15.828818 systemd[1]: Starting initrd-parse-etc.service... Oct 31 00:52:15.830360 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 00:52:15.830452 systemd[1]: Finished ignition-quench.service. Oct 31 00:52:15.841759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 00:52:15.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 00:52:15.841852 systemd[1]: Finished initrd-parse-etc.service. Oct 31 00:52:15.843026 systemd[1]: Reached target initrd-fs.target. Oct 31 00:52:15.844424 systemd[1]: Reached target initrd.target. Oct 31 00:52:15.845847 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 31 00:52:15.846571 systemd[1]: Starting dracut-pre-pivot.service... Oct 31 00:52:15.856497 systemd[1]: Finished dracut-pre-pivot.service. Oct 31 00:52:15.858451 systemd[1]: Starting initrd-cleanup.service... Oct 31 00:52:15.866419 systemd[1]: Stopped target nss-lookup.target. Oct 31 00:52:15.867287 systemd[1]: Stopped target remote-cryptsetup.target. Oct 31 00:52:15.868689 systemd[1]: Stopped target timers.target. Oct 31 00:52:15.869969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Oct 31 00:52:15.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.870101 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 31 00:52:15.871373 systemd[1]: Stopped target initrd.target.
Oct 31 00:52:15.872701 systemd[1]: Stopped target basic.target.
Oct 31 00:52:15.873936 systemd[1]: Stopped target ignition-complete.target.
Oct 31 00:52:15.875317 systemd[1]: Stopped target ignition-diskful.target.
Oct 31 00:52:15.876626 systemd[1]: Stopped target initrd-root-device.target.
Oct 31 00:52:15.878115 systemd[1]: Stopped target remote-fs.target.
Oct 31 00:52:15.879555 systemd[1]: Stopped target remote-fs-pre.target.
Oct 31 00:52:15.880961 systemd[1]: Stopped target sysinit.target.
Oct 31 00:52:15.882293 systemd[1]: Stopped target local-fs.target.
Oct 31 00:52:15.883723 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 00:52:15.885173 systemd[1]: Stopped target swap.target.
Oct 31 00:52:15.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.886406 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:52:15.886521 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 00:52:15.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.887852 systemd[1]: Stopped target cryptsetup.target.
Oct 31 00:52:15.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.888952 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:52:15.889075 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 00:52:15.890675 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:52:15.890789 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 31 00:52:15.892113 systemd[1]: Stopped target paths.target.
Oct 31 00:52:15.893384 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:52:15.897055 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 31 00:52:15.898608 systemd[1]: Stopped target slices.target.
Oct 31 00:52:15.900280 systemd[1]: Stopped target sockets.target.
Oct 31 00:52:15.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.901833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:52:15.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.901956 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 00:52:15.907552 iscsid[743]: iscsid shutting down.
Oct 31 00:52:15.903464 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:52:15.903559 systemd[1]: Stopped ignition-files.service.
Oct 31 00:52:15.905816 systemd[1]: Stopping ignition-mount.service...
Oct 31 00:52:15.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.908918 systemd[1]: Stopping iscsid.service...
Oct 31 00:52:15.909860 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:52:15.914619 ignition[894]: INFO : Ignition 2.14.0
Oct 31 00:52:15.914619 ignition[894]: INFO : Stage: umount
Oct 31 00:52:15.914619 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:52:15.914619 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:52:15.914619 ignition[894]: INFO : umount: umount passed
Oct 31 00:52:15.914619 ignition[894]: INFO : Ignition finished successfully
Oct 31 00:52:15.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.909991 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 00:52:15.912177 systemd[1]: Stopping sysroot-boot.service...
Oct 31 00:52:15.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.913740 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:52:15.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.913893 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 00:52:15.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.915594 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:52:15.915683 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 00:52:15.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.918387 systemd[1]: iscsid.service: Deactivated successfully.
Oct 31 00:52:15.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.918481 systemd[1]: Stopped iscsid.service.
Oct 31 00:52:15.919753 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:52:15.919827 systemd[1]: Stopped ignition-mount.service.
Oct 31 00:52:15.922253 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:52:15.922758 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:52:15.922825 systemd[1]: Closed iscsid.socket.
Oct 31 00:52:15.923591 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:52:15.923634 systemd[1]: Stopped ignition-disks.service.
Oct 31 00:52:15.925246 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:52:15.925289 systemd[1]: Stopped ignition-kargs.service.
Oct 31 00:52:15.926681 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:52:15.926720 systemd[1]: Stopped ignition-setup.service.
Oct 31 00:52:15.929099 systemd[1]: Stopping iscsiuio.service...
Oct 31 00:52:15.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.932233 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:52:15.932322 systemd[1]: Finished initrd-cleanup.service.
Oct 31 00:52:15.933818 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 31 00:52:15.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.933899 systemd[1]: Stopped iscsiuio.service.
Oct 31 00:52:15.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.935696 systemd[1]: Stopped target network.target.
Oct 31 00:52:15.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.938198 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:52:15.938243 systemd[1]: Closed iscsiuio.socket.
Oct 31 00:52:15.939753 systemd[1]: Stopping systemd-networkd.service...
Oct 31 00:52:15.941236 systemd[1]: Stopping systemd-resolved.service...
Oct 31 00:52:15.950077 systemd-networkd[738]: eth0: DHCPv6 lease lost
Oct 31 00:52:15.975397 kernel: kauditd_printk_skb: 52 callbacks suppressed
Oct 31 00:52:15.975421 kernel: audit: type=1334 audit(1761871935.969:63): prog-id=9 op=UNLOAD
Oct 31 00:52:15.975431 kernel: audit: type=1131 audit(1761871935.972:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.969000 audit: BPF prog-id=9 op=UNLOAD
Oct 31 00:52:15.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.951120 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:52:15.979993 kernel: audit: type=1131 audit(1761871935.976:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.951247 systemd[1]: Stopped systemd-networkd.service.
Oct 31 00:52:15.984958 kernel: audit: type=1131 audit(1761871935.980:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.984979 kernel: audit: type=1334 audit(1761871935.981:67): prog-id=6 op=UNLOAD
Oct 31 00:52:15.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.981000 audit: BPF prog-id=6 op=UNLOAD
Oct 31 00:52:15.952951 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:52:15.952983 systemd[1]: Closed systemd-networkd.socket.
Oct 31 00:52:15.955817 systemd[1]: Stopping network-cleanup.service...
Oct 31 00:52:15.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.957380 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:52:15.996011 kernel: audit: type=1131 audit(1761871935.988:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.996042 kernel: audit: type=1131 audit(1761871935.992:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.957450 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 31 00:52:16.000033 kernel: audit: type=1131 audit(1761871935.996:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.959115 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:52:16.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.959172 systemd[1]: Stopped systemd-sysctl.service.
Oct 31 00:52:16.005541 kernel: audit: type=1131 audit(1761871936.000:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.962008 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:52:16.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.962076 systemd[1]: Stopped systemd-modules-load.service.
Oct 31 00:52:16.011161 kernel: audit: type=1131 audit(1761871936.006:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:16.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.965548 systemd[1]: Stopping systemd-udevd.service...
Oct 31 00:52:16.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:16.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:15.968665 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 31 00:52:15.969199 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:52:15.969316 systemd[1]: Stopped systemd-resolved.service.
Oct 31 00:52:15.972798 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:52:15.972886 systemd[1]: Stopped sysroot-boot.service.
Oct 31 00:52:15.977010 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:52:15.977150 systemd[1]: Stopped systemd-udevd.service.
Oct 31 00:52:15.981705 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:52:15.981750 systemd[1]: Closed systemd-udevd-control.socket.
Oct 31 00:52:15.985903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:52:15.985939 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 31 00:52:16.023000 audit: BPF prog-id=8 op=UNLOAD
Oct 31 00:52:16.023000 audit: BPF prog-id=7 op=UNLOAD
Oct 31 00:52:16.023000 audit: BPF prog-id=5 op=UNLOAD
Oct 31 00:52:16.023000 audit: BPF prog-id=4 op=UNLOAD
Oct 31 00:52:16.023000 audit: BPF prog-id=3 op=UNLOAD
Oct 31 00:52:15.987449 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:52:15.987498 systemd[1]: Stopped dracut-pre-udev.service.
Oct 31 00:52:15.988897 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:52:15.988939 systemd[1]: Stopped dracut-cmdline.service.
Oct 31 00:52:15.992833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:52:15.992879 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 31 00:52:15.996916 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:52:15.996958 systemd[1]: Stopped initrd-setup-root.service.
Oct 31 00:52:16.001678 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 31 00:52:16.004870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:52:16.004936 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 31 00:52:16.006678 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:52:16.006770 systemd[1]: Stopped network-cleanup.service.
Oct 31 00:52:16.010715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:52:16.010797 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 31 00:52:16.012142 systemd[1]: Reached target initrd-switch-root.target.
Oct 31 00:52:16.039826 systemd-journald[290]: Received SIGTERM from PID 1 (n/a).
Oct 31 00:52:16.014297 systemd[1]: Starting initrd-switch-root.service...
Oct 31 00:52:16.020566 systemd[1]: Switching root.
Oct 31 00:52:16.041262 systemd-journald[290]: Journal stopped
Oct 31 00:52:18.120964 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 31 00:52:18.121035 kernel: SELinux: Class anon_inode not defined in policy.
Oct 31 00:52:18.121052 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 31 00:52:18.121063 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:52:18.121073 kernel: SELinux: policy capability open_perms=1
Oct 31 00:52:18.121086 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:52:18.121097 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:52:18.121106 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:52:18.121128 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:52:18.121138 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:52:18.121147 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:52:18.121159 systemd[1]: Successfully loaded SELinux policy in 34.771ms.
Oct 31 00:52:18.121194 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.269ms.
Oct 31 00:52:18.121207 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 00:52:18.121219 systemd[1]: Detected virtualization kvm.
Oct 31 00:52:18.121229 systemd[1]: Detected architecture arm64.
Oct 31 00:52:18.121239 systemd[1]: Detected first boot.
Oct 31 00:52:18.121249 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:52:18.121259 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Oct 31 00:52:18.121269 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:52:18.121280 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 00:52:18.121292 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 00:52:18.121303 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:52:18.121314 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:52:18.121324 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 31 00:52:18.121335 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 31 00:52:18.121345 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 31 00:52:18.121355 systemd[1]: Created slice system-getty.slice.
Oct 31 00:52:18.121366 systemd[1]: Created slice system-modprobe.slice.
Oct 31 00:52:18.121377 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 31 00:52:18.121387 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 31 00:52:18.121397 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 31 00:52:18.121408 systemd[1]: Created slice user.slice.
Oct 31 00:52:18.121418 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 00:52:18.121428 systemd[1]: Started systemd-ask-password-wall.path.
Oct 31 00:52:18.121438 systemd[1]: Set up automount boot.automount.
Oct 31 00:52:18.121448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 31 00:52:18.121460 systemd[1]: Reached target integritysetup.target.
Oct 31 00:52:18.121470 systemd[1]: Reached target remote-cryptsetup.target.
Oct 31 00:52:18.121480 systemd[1]: Reached target remote-fs.target.
Oct 31 00:52:18.121490 systemd[1]: Reached target slices.target.
Oct 31 00:52:18.121500 systemd[1]: Reached target swap.target.
Oct 31 00:52:18.121510 systemd[1]: Reached target torcx.target.
Oct 31 00:52:18.121520 systemd[1]: Reached target veritysetup.target.
Oct 31 00:52:18.121530 systemd[1]: Listening on systemd-coredump.socket.
Oct 31 00:52:18.121541 systemd[1]: Listening on systemd-initctl.socket.
Oct 31 00:52:18.121551 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 00:52:18.121562 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 00:52:18.121572 systemd[1]: Listening on systemd-journald.socket.
Oct 31 00:52:18.121584 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 00:52:18.121594 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 00:52:18.121605 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 00:52:18.121615 systemd[1]: Listening on systemd-userdbd.socket.
Oct 31 00:52:18.121625 systemd[1]: Mounting dev-hugepages.mount...
Oct 31 00:52:18.121635 systemd[1]: Mounting dev-mqueue.mount...
Oct 31 00:52:18.121647 systemd[1]: Mounting media.mount...
Oct 31 00:52:18.121657 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 31 00:52:18.121667 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 31 00:52:18.121677 systemd[1]: Mounting tmp.mount...
Oct 31 00:52:18.121687 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 31 00:52:18.121697 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.121707 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 00:52:18.121717 systemd[1]: Starting modprobe@configfs.service...
Oct 31 00:52:18.121727 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:52:18.121739 systemd[1]: Starting modprobe@drm.service...
Oct 31 00:52:18.121749 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:52:18.121759 systemd[1]: Starting modprobe@fuse.service...
Oct 31 00:52:18.121769 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:52:18.121779 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:52:18.121790 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 31 00:52:18.121801 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Oct 31 00:52:18.121811 systemd[1]: Starting systemd-journald.service...
Oct 31 00:52:18.121825 systemd[1]: Starting systemd-modules-load.service...
Oct 31 00:52:18.121836 systemd[1]: Starting systemd-network-generator.service...
Oct 31 00:52:18.121846 systemd[1]: Starting systemd-remount-fs.service...
Oct 31 00:52:18.121856 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 00:52:18.121866 systemd[1]: Mounted dev-hugepages.mount.
Oct 31 00:52:18.121876 systemd[1]: Mounted dev-mqueue.mount.
Oct 31 00:52:18.121886 systemd[1]: Mounted media.mount.
Oct 31 00:52:18.121896 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 31 00:52:18.121909 systemd-journald[1022]: Journal started
Oct 31 00:52:18.121951 systemd-journald[1022]: Runtime Journal (/run/log/journal/2bc353da700342f3909b3d486dd154a7) is 6.0M, max 48.7M, 42.6M free.
Oct 31 00:52:18.040000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 31 00:52:18.040000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Oct 31 00:52:18.119000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 31 00:52:18.119000 audit[1022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe3dfe890 a2=4000 a3=1 items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:52:18.119000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 31 00:52:18.127133 systemd[1]: Started systemd-journald.service.
Oct 31 00:52:18.127334 kernel: fuse: init (API version 7.34)
Oct 31 00:52:18.127366 kernel: loop: module loaded
Oct 31 00:52:18.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.127265 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 31 00:52:18.128248 systemd[1]: Mounted tmp.mount.
Oct 31 00:52:18.129335 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 00:52:18.130450 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:52:18.130607 systemd[1]: Finished modprobe@configfs.service.
Oct 31 00:52:18.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.131843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:52:18.131996 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:52:18.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.133335 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:52:18.133481 systemd[1]: Finished modprobe@drm.service.
Oct 31 00:52:18.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.134541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:52:18.134756 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:52:18.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.136006 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:52:18.136318 systemd[1]: Finished modprobe@fuse.service.
Oct 31 00:52:18.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.137363 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:52:18.137901 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:52:18.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.139642 systemd[1]: Finished systemd-modules-load.service.
Oct 31 00:52:18.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.140845 systemd[1]: Finished systemd-network-generator.service.
Oct 31 00:52:18.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.142164 systemd[1]: Finished systemd-remount-fs.service.
Oct 31 00:52:18.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.143476 systemd[1]: Reached target network-pre.target.
Oct 31 00:52:18.145568 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 31 00:52:18.147803 systemd[1]: Mounting sys-kernel-config.mount...
Oct 31 00:52:18.148550 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:52:18.150585 systemd[1]: Starting systemd-hwdb-update.service...
Oct 31 00:52:18.152756 systemd[1]: Starting systemd-journal-flush.service...
Oct 31 00:52:18.153638 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:52:18.158984 systemd[1]: Starting systemd-random-seed.service...
Oct 31 00:52:18.159990 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.161672 systemd-journald[1022]: Time spent on flushing to /var/log/journal/2bc353da700342f3909b3d486dd154a7 is 15.662ms for 933 entries.
Oct 31 00:52:18.161672 systemd-journald[1022]: System Journal (/var/log/journal/2bc353da700342f3909b3d486dd154a7) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:52:18.189591 systemd-journald[1022]: Received client request to flush runtime journal.
Oct 31 00:52:18.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.161134 systemd[1]: Starting systemd-sysctl.service...
Oct 31 00:52:18.168314 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 31 00:52:18.170378 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 31 00:52:18.171284 systemd[1]: Mounted sys-kernel-config.mount.
Oct 31 00:52:18.173396 systemd[1]: Starting systemd-sysusers.service...
Oct 31 00:52:18.177651 systemd[1]: Finished systemd-random-seed.service.
Oct 31 00:52:18.178621 systemd[1]: Reached target first-boot-complete.target.
Oct 31 00:52:18.186406 systemd[1]: Finished systemd-sysctl.service.
Oct 31 00:52:18.190482 systemd[1]: Finished systemd-journal-flush.service.
Oct 31 00:52:18.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.191753 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 00:52:18.193890 systemd[1]: Starting systemd-udev-settle.service...
Oct 31 00:52:18.196921 systemd[1]: Finished systemd-sysusers.service.
Oct 31 00:52:18.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.199072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 00:52:18.205281 udevadm[1081]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 00:52:18.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.214826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 00:52:18.567942 systemd[1]: Finished systemd-hwdb-update.service.
Oct 31 00:52:18.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.570131 systemd[1]: Starting systemd-udevd.service...
Oct 31 00:52:18.587709 systemd-udevd[1087]: Using default interface naming scheme 'v252'.
Oct 31 00:52:18.602962 systemd[1]: Started systemd-udevd.service.
Oct 31 00:52:18.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.605747 systemd[1]: Starting systemd-networkd.service...
Oct 31 00:52:18.613457 systemd[1]: Starting systemd-userdbd.service...
Oct 31 00:52:18.621080 systemd[1]: Found device dev-ttyAMA0.device.
Oct 31 00:52:18.643858 systemd[1]: Started systemd-userdbd.service.
Oct 31 00:52:18.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.673186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 31 00:52:18.698554 systemd-networkd[1097]: lo: Link UP
Oct 31 00:52:18.698563 systemd-networkd[1097]: lo: Gained carrier
Oct 31 00:52:18.698916 systemd-networkd[1097]: Enumeration completed
Oct 31 00:52:18.699047 systemd[1]: Started systemd-networkd.service.
Oct 31 00:52:18.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.700100 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:52:18.701244 systemd-networkd[1097]: eth0: Link UP
Oct 31 00:52:18.701255 systemd-networkd[1097]: eth0: Gained carrier
Oct 31 00:52:18.703461 systemd[1]: Finished systemd-udev-settle.service.
Oct 31 00:52:18.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.705660 systemd[1]: Starting lvm2-activation-early.service...
Oct 31 00:52:18.715531 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:52:18.731242 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:52:18.751899 systemd[1]: Finished lvm2-activation-early.service.
Oct 31 00:52:18.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.752973 systemd[1]: Reached target cryptsetup.target.
Oct 31 00:52:18.755134 systemd[1]: Starting lvm2-activation.service...
Oct 31 00:52:18.758825 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:52:18.787011 systemd[1]: Finished lvm2-activation.service.
Oct 31 00:52:18.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.787933 systemd[1]: Reached target local-fs-pre.target.
Oct 31 00:52:18.788826 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:52:18.788858 systemd[1]: Reached target local-fs.target.
Oct 31 00:52:18.789665 systemd[1]: Reached target machines.target.
Oct 31 00:52:18.792065 systemd[1]: Starting ldconfig.service...
Oct 31 00:52:18.793156 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.793212 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:18.794404 systemd[1]: Starting systemd-boot-update.service...
Oct 31 00:52:18.796231 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct 31 00:52:18.798509 systemd[1]: Starting systemd-machine-id-commit.service...
Oct 31 00:52:18.800506 systemd[1]: Starting systemd-sysext.service...
Oct 31 00:52:18.801672 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1126 (bootctl)
Oct 31 00:52:18.802871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct 31 00:52:18.811421 systemd[1]: Unmounting usr-share-oem.mount...
Oct 31 00:52:18.817770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 31 00:52:18.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.821994 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Oct 31 00:52:18.822284 systemd[1]: Unmounted usr-share-oem.mount.
Oct 31 00:52:18.880042 kernel: loop0: detected capacity change from 0 to 207008
Oct 31 00:52:18.884424 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 31 00:52:18.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.895038 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:52:18.899265 systemd-fsck[1138]: fsck.fat 4.2 (2021-01-31)
Oct 31 00:52:18.899265 systemd-fsck[1138]: /dev/vda1: 236 files, 117310/258078 clusters
Oct 31 00:52:18.902036 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 31 00:52:18.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.910121 kernel: loop1: detected capacity change from 0 to 207008
Oct 31 00:52:18.918038 (sd-sysext)[1144]: Using extensions 'kubernetes'.
Oct 31 00:52:18.918994 (sd-sysext)[1144]: Merged extensions into '/usr'.
Oct 31 00:52:18.936212 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.937531 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:52:18.939475 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:52:18.941558 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:52:18.942508 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.942645 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:18.943519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:52:18.943666 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:52:18.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.945184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:52:18.945334 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:52:18.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.946796 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:52:18.946949 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:52:18.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:18.948505 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:52:18.948601 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 00:52:18.997900 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:52:19.001561 systemd[1]: Finished ldconfig.service.
Oct 31 00:52:19.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.116594 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:52:19.118518 systemd[1]: Mounting boot.mount...
Oct 31 00:52:19.120368 systemd[1]: Mounting usr-share-oem.mount...
Oct 31 00:52:19.125094 systemd[1]: Mounted usr-share-oem.mount.
Oct 31 00:52:19.127690 systemd[1]: Finished systemd-sysext.service.
Oct 31 00:52:19.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.128671 systemd[1]: Mounted boot.mount.
Oct 31 00:52:19.131586 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:52:19.133415 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 31 00:52:19.137116 systemd[1]: Finished systemd-boot-update.service.
Oct 31 00:52:19.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.140907 systemd[1]: Reloading.
Oct 31 00:52:19.145687 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 31 00:52:19.146468 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:52:19.147868 systemd-tmpfiles[1161]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:52:19.179206 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2025-10-31T00:52:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Oct 31 00:52:19.180152 /usr/lib/systemd/system-generators/torcx-generator[1182]: time="2025-10-31T00:52:19Z" level=info msg="torcx already run"
Oct 31 00:52:19.256972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 00:52:19.256995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 00:52:19.279021 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:52:19.328361 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 31 00:52:19.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.332573 systemd[1]: Starting audit-rules.service...
Oct 31 00:52:19.334549 systemd[1]: Starting clean-ca-certificates.service...
Oct 31 00:52:19.336758 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 31 00:52:19.339254 systemd[1]: Starting systemd-resolved.service...
Oct 31 00:52:19.341367 systemd[1]: Starting systemd-timesyncd.service...
Oct 31 00:52:19.343304 systemd[1]: Starting systemd-update-utmp.service...
Oct 31 00:52:19.344671 systemd[1]: Finished clean-ca-certificates.service.
Oct 31 00:52:19.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.348032 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:52:19.350661 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.351947 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:52:19.354961 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:52:19.356953 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:52:19.357823 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.357954 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:19.358068 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:52:19.358831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:52:19.358999 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:52:19.359000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.361051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:52:19.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.361211 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:52:19.362669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:52:19.362810 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:52:19.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.365761 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:52:19.365914 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.367946 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.369596 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:52:19.371740 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:52:19.373996 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:52:19.374789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.374927 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:19.375044 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:52:19.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.376156 systemd[1]: Finished systemd-update-utmp.service.
Oct 31 00:52:19.377623 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 31 00:52:19.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.379359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:52:19.379688 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:52:19.381122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:52:19.381301 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:52:19.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.383477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:52:19.383637 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:52:19.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.387676 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.388912 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 00:52:19.391089 systemd[1]: Starting modprobe@drm.service...
Oct 31 00:52:19.394212 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 00:52:19.396316 systemd[1]: Starting modprobe@loop.service...
Oct 31 00:52:19.397098 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.397243 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:19.398513 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 31 00:52:19.401035 systemd[1]: Starting systemd-update-done.service...
Oct 31 00:52:19.401842 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:52:19.403122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:52:19.403290 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 00:52:19.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.405651 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:52:19.405824 systemd[1]: Finished modprobe@drm.service.
Oct 31 00:52:19.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.407334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:52:19.407495 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 00:52:19.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.408781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:52:19.408976 systemd[1]: Finished modprobe@loop.service.
Oct 31 00:52:19.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.410458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:52:19.410553 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.418926 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:52:19.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.420462 systemd[1]: Finished systemd-update-done.service.
Oct 31 00:52:19.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 00:52:19.434000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 31 00:52:19.434000 audit[1280]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe6c924a0 a2=420 a3=0 items=0 ppid=1228 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 00:52:19.434000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 31 00:52:19.435035 augenrules[1280]: No rules
Oct 31 00:52:19.435889 systemd[1]: Finished audit-rules.service.
Oct 31 00:52:19.439659 systemd[1]: Started systemd-timesyncd.service.
Oct 31 00:52:19.440706 systemd[1]: Reached target time-set.target.
Oct 31 00:52:19.905149 systemd-timesyncd[1234]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 00:52:19.905219 systemd-timesyncd[1234]: Initial clock synchronization to Fri 2025-10-31 00:52:19.905056 UTC.
Oct 31 00:52:19.907062 systemd-resolved[1232]: Positive Trust Anchors:
Oct 31 00:52:19.907339 systemd-resolved[1232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:52:19.907417 systemd-resolved[1232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 31 00:52:19.917137 systemd-resolved[1232]: Defaulting to hostname 'linux'.
Oct 31 00:52:19.918784 systemd[1]: Started systemd-resolved.service.
Oct 31 00:52:19.919834 systemd[1]: Reached target network.target.
Oct 31 00:52:19.920713 systemd[1]: Reached target nss-lookup.target.
Oct 31 00:52:19.921576 systemd[1]: Reached target sysinit.target.
Oct 31 00:52:19.922443 systemd[1]: Started motdgen.path.
Oct 31 00:52:19.923122 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 31 00:52:19.924330 systemd[1]: Started logrotate.timer.
Oct 31 00:52:19.925140 systemd[1]: Started mdadm.timer.
Oct 31 00:52:19.925858 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 31 00:52:19.926692 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 00:52:19.926729 systemd[1]: Reached target paths.target.
Oct 31 00:52:19.927497 systemd[1]: Reached target timers.target.
Oct 31 00:52:19.928605 systemd[1]: Listening on dbus.socket.
Oct 31 00:52:19.930517 systemd[1]: Starting docker.socket...
Oct 31 00:52:19.932272 systemd[1]: Listening on sshd.socket.
Oct 31 00:52:19.933079 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:19.933447 systemd[1]: Listening on docker.socket.
Oct 31 00:52:19.934182 systemd[1]: Reached target sockets.target.
Oct 31 00:52:19.934905 systemd[1]: Reached target basic.target.
Oct 31 00:52:19.935862 systemd[1]: System is tainted: cgroupsv1
Oct 31 00:52:19.935915 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.935936 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 00:52:19.936973 systemd[1]: Starting containerd.service...
Oct 31 00:52:19.938859 systemd[1]: Starting dbus.service...
Oct 31 00:52:19.940743 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 31 00:52:19.943181 systemd[1]: Starting extend-filesystems.service...
Oct 31 00:52:19.944187 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 31 00:52:19.945467 systemd[1]: Starting motdgen.service...
Oct 31 00:52:19.947019 jq[1292]: false
Oct 31 00:52:19.947399 systemd[1]: Starting prepare-helm.service...
Oct 31 00:52:19.949575 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 31 00:52:19.952487 systemd[1]: Starting sshd-keygen.service...
Oct 31 00:52:19.955587 systemd[1]: Starting systemd-logind.service...
Oct 31 00:52:19.956344 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 00:52:19.956438 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 00:52:19.957770 systemd[1]: Starting update-engine.service...
Oct 31 00:52:19.960221 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 31 00:52:19.964065 jq[1309]: true
Oct 31 00:52:19.964325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 00:52:19.964997 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 31 00:52:19.966322 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 00:52:19.966559 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 31 00:52:19.975322 jq[1319]: true
Oct 31 00:52:19.978788 tar[1314]: linux-arm64/LICENSE
Oct 31 00:52:19.979224 tar[1314]: linux-arm64/helm
Oct 31 00:52:19.987019 extend-filesystems[1293]: Found loop1
Oct 31 00:52:19.987019 extend-filesystems[1293]: Found vda
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda1
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda2
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda3
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found usr
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda4
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda6
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda7
Oct 31 00:52:19.989414 extend-filesystems[1293]: Found vda9
Oct 31 00:52:19.989414 extend-filesystems[1293]: Checking size of /dev/vda9
Oct 31 00:52:19.995546 dbus-daemon[1291]: [system] SELinux support is enabled
Oct 31 00:52:20.020508 bash[1344]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 00:52:19.995741 systemd[1]: Started dbus.service.
Oct 31 00:52:19.998901 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 00:52:19.998925 systemd[1]: Reached target system-config.target.
Oct 31 00:52:19.999962 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 00:52:19.999982 systemd[1]: Reached target user-config.target.
Oct 31 00:52:20.001621 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 00:52:20.001886 systemd[1]: Finished motdgen.service.
Oct 31 00:52:20.020067 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 31 00:52:20.022521 extend-filesystems[1293]: Resized partition /dev/vda9
Oct 31 00:52:20.024801 extend-filesystems[1350]: resize2fs 1.46.5 (30-Dec-2021)
Oct 31 00:52:20.036183 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 00:52:20.047306 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 31 00:52:20.047580 systemd-logind[1304]: New seat seat0.
Oct 31 00:52:20.049593 systemd[1]: Started systemd-logind.service.
Oct 31 00:52:20.052299 update_engine[1306]: I1031 00:52:20.051723 1306 main.cc:92] Flatcar Update Engine starting
Oct 31 00:52:20.056307 systemd[1]: Started update-engine.service.
Oct 31 00:52:20.056655 update_engine[1306]: I1031 00:52:20.056551 1306 update_check_scheduler.cc:74] Next update check in 8m38s
Oct 31 00:52:20.060927 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 00:52:20.059716 systemd[1]: Started locksmithd.service.
Oct 31 00:52:20.068420 extend-filesystems[1350]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 00:52:20.068420 extend-filesystems[1350]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 00:52:20.068420 extend-filesystems[1350]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 00:52:20.074904 extend-filesystems[1293]: Resized filesystem in /dev/vda9
Oct 31 00:52:20.070341 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 00:52:20.070591 systemd[1]: Finished extend-filesystems.service.
Oct 31 00:52:20.077109 env[1321]: time="2025-10-31T00:52:20.077059425Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 31 00:52:20.100963 env[1321]: time="2025-10-31T00:52:20.100858465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 00:52:20.101313 env[1321]: time="2025-10-31T00:52:20.101285905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104358 env[1321]: time="2025-10-31T00:52:20.104295905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104358 env[1321]: time="2025-10-31T00:52:20.104331305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104736 env[1321]: time="2025-10-31T00:52:20.104703585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104736 env[1321]: time="2025-10-31T00:52:20.104727785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104823 env[1321]: time="2025-10-31T00:52:20.104742625Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 31 00:52:20.104823 env[1321]: time="2025-10-31T00:52:20.104752505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.104861 env[1321]: time="2025-10-31T00:52:20.104833505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.105237 env[1321]: time="2025-10-31T00:52:20.105146065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:52:20.105407 env[1321]: time="2025-10-31T00:52:20.105384465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:52:20.105441 env[1321]: time="2025-10-31T00:52:20.105406305Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 00:52:20.105486 env[1321]: time="2025-10-31T00:52:20.105467865Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 31 00:52:20.105517 env[1321]: time="2025-10-31T00:52:20.105487465Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 00:52:20.108992 env[1321]: time="2025-10-31T00:52:20.108960625Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 00:52:20.109058 env[1321]: time="2025-10-31T00:52:20.108995425Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 00:52:20.109058 env[1321]: time="2025-10-31T00:52:20.109009225Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 00:52:20.109058 env[1321]: time="2025-10-31T00:52:20.109036505Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109058 env[1321]: time="2025-10-31T00:52:20.109050265Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109164 env[1321]: time="2025-10-31T00:52:20.109063585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109164 env[1321]: time="2025-10-31T00:52:20.109076465Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109485 env[1321]: time="2025-10-31T00:52:20.109459985Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109569 env[1321]: time="2025-10-31T00:52:20.109488105Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109569 env[1321]: time="2025-10-31T00:52:20.109501865Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109569 env[1321]: time="2025-10-31T00:52:20.109514065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.109639 env[1321]: time="2025-10-31T00:52:20.109620345Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 00:52:20.109777 env[1321]: time="2025-10-31T00:52:20.109740705Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 00:52:20.109839 env[1321]: time="2025-10-31T00:52:20.109819505Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 00:52:20.110221 env[1321]: time="2025-10-31T00:52:20.110126745Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 00:52:20.110221 env[1321]: time="2025-10-31T00:52:20.110192785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110221 env[1321]: time="2025-10-31T00:52:20.110210705Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 00:52:20.110345 env[1321]: time="2025-10-31T00:52:20.110324865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110390 env[1321]: time="2025-10-31T00:52:20.110345025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110390 env[1321]: time="2025-10-31T00:52:20.110357865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110390 env[1321]: time="2025-10-31T00:52:20.110369505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110390 env[1321]: time="2025-10-31T00:52:20.110381185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110468 env[1321]: time="2025-10-31T00:52:20.110393745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110468 env[1321]: time="2025-10-31T00:52:20.110406305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110468 env[1321]: time="2025-10-31T00:52:20.110424705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110468 env[1321]: time="2025-10-31T00:52:20.110437385Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 00:52:20.110587 env[1321]: time="2025-10-31T00:52:20.110563545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110609 env[1321]: time="2025-10-31T00:52:20.110584985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110609 env[1321]: time="2025-10-31T00:52:20.110599545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110650 env[1321]: time="2025-10-31T00:52:20.110611145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 00:52:20.110650 env[1321]: time="2025-10-31T00:52:20.110626625Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 31 00:52:20.110650 env[1321]: time="2025-10-31T00:52:20.110637145Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 00:52:20.110705 env[1321]: time="2025-10-31T00:52:20.110654265Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 31 00:52:20.110705 env[1321]: time="2025-10-31T00:52:20.110691225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 00:52:20.110944 env[1321]: time="2025-10-31T00:52:20.110882145Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.110956985Z" level=info msg="Connect containerd service"
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.110989345Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.111822745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.112135945Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.112191825Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 00:52:20.113568 env[1321]: time="2025-10-31T00:52:20.112237945Z" level=info msg="containerd successfully booted in 0.037508s"
Oct 31 00:52:20.112440 systemd[1]: Started containerd.service.
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114392185Z" level=info msg="Start subscribing containerd event"
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114457985Z" level=info msg="Start recovering state"
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114735185Z" level=info msg="Start event monitor"
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114765185Z" level=info msg="Start snapshots syncer"
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114776145Z" level=info msg="Start cni network conf syncer for default"
Oct 31 00:52:20.115735 env[1321]: time="2025-10-31T00:52:20.114788625Z" level=info msg="Start streaming server"
Oct 31 00:52:20.119703 locksmithd[1352]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 00:52:20.401297 tar[1314]: linux-arm64/README.md
Oct 31 00:52:20.408611 systemd[1]: Finished prepare-helm.service.
Oct 31 00:52:20.772433 systemd-networkd[1097]: eth0: Gained IPv6LL
Oct 31 00:52:20.774200 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 31 00:52:20.775651 systemd[1]: Reached target network-online.target.
Oct 31 00:52:20.778920 systemd[1]: Starting kubelet.service...
Oct 31 00:52:21.406510 systemd[1]: Started kubelet.service.
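containerd's CRI plugin reported "no network config found in /etc/cni/net.d" above because nothing has installed a CNI configuration yet; a network addon normally drops one in later, and the "Start cni network conf syncer" loop then picks it up. As a minimal sketch of the kind of file whose absence is being reported (written to a temporary directory here rather than /etc/cni/net.d; the bridge/host-local plugin names and the 10.22.0.0/16 subnet are illustrative assumptions, not taken from this log):

```shell
# Hypothetical example: a minimal CNI conflist of the sort containerd's
# conf syncer looks for. Paths and values are illustrative only.
CNI_DIR="$(mktemp -d)"
cat > "$CNI_DIR/10-bridge.conflist" <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    }
  ]
}
EOF
echo "wrote $CNI_DIR/10-bridge.conflist"
```

On a real node this file would live in the NetworkPluginConfDir shown in the CRI config entry above (/etc/cni/net.d), and the error clears once a valid conflist appears there.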
Oct 31 00:52:21.849469 sshd_keygen[1308]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 00:52:21.866438 kubelet[1377]: E1031 00:52:21.866382 1377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:52:21.868955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:52:21.869087 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:52:21.871730 systemd[1]: Finished sshd-keygen.service.
Oct 31 00:52:21.874105 systemd[1]: Starting issuegen.service...
Oct 31 00:52:21.879873 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 00:52:21.880720 systemd[1]: Finished issuegen.service.
Oct 31 00:52:21.883074 systemd[1]: Starting systemd-user-sessions.service...
Oct 31 00:52:21.889949 systemd[1]: Finished systemd-user-sessions.service.
Oct 31 00:52:21.892550 systemd[1]: Started getty@tty1.service.
Oct 31 00:52:21.895333 systemd[1]: Started serial-getty@ttyAMA0.service.
Oct 31 00:52:21.896474 systemd[1]: Reached target getty.target.
Oct 31 00:52:21.897587 systemd[1]: Reached target multi-user.target.
Oct 31 00:52:21.900341 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 31 00:52:21.907095 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 31 00:52:21.907326 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 31 00:52:21.908365 systemd[1]: Startup finished in 5.154s (kernel) + 5.356s (userspace) = 10.511s.
Oct 31 00:52:24.839416 systemd[1]: Created slice system-sshd.slice.
Oct 31 00:52:24.840562 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:55290.service.
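The kubelet exit above is expected on a freshly provisioned node: /var/lib/kubelet/config.yaml is normally generated by `kubeadm init` (control plane) or `kubeadm join` (worker), so the unit keeps failing until one of those has run. A small hedged check, with the path taken from the log and the kubeadm hint an assumption about how this node will be initialized:

```shell
# Check for the file the kubelet failed to open in the log above.
# KUBELET_CONFIG may be overridden for testing; the default is the logged path.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"

if [ -f "$KUBELET_CONFIG" ]; then
    STATUS="present"
    echo "kubelet config $STATUS: $KUBELET_CONFIG"
else
    STATUS="missing"
    echo "kubelet config $STATUS: $KUBELET_CONFIG"
    echo "hint: 'kubeadm init' or 'kubeadm join' generates this file"
fi
```

Until the file exists, systemd's restart logic (visible later in this log) keeps relaunching the unit and it keeps exiting with status 1.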
Oct 31 00:52:24.903171 sshd[1403]: Accepted publickey for core from 10.0.0.1 port 55290 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:52:24.906757 sshd[1403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:24.926372 systemd-logind[1304]: New session 1 of user core.
Oct 31 00:52:24.926917 systemd[1]: Created slice user-500.slice.
Oct 31 00:52:24.929246 systemd[1]: Starting user-runtime-dir@500.service...
Oct 31 00:52:24.941569 systemd[1]: Finished user-runtime-dir@500.service.
Oct 31 00:52:24.942880 systemd[1]: Starting user@500.service...
Oct 31 00:52:24.948827 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:25.011092 systemd[1407]: Queued start job for default target default.target.
Oct 31 00:52:25.011664 systemd[1407]: Reached target paths.target.
Oct 31 00:52:25.011769 systemd[1407]: Reached target sockets.target.
Oct 31 00:52:25.011840 systemd[1407]: Reached target timers.target.
Oct 31 00:52:25.011909 systemd[1407]: Reached target basic.target.
Oct 31 00:52:25.012013 systemd[1407]: Reached target default.target.
Oct 31 00:52:25.012101 systemd[1407]: Startup finished in 57ms.
Oct 31 00:52:25.012119 systemd[1]: Started user@500.service.
Oct 31 00:52:25.013073 systemd[1]: Started session-1.scope.
Oct 31 00:52:25.068804 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:55300.service.
Oct 31 00:52:25.139001 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:52:25.140773 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:25.145318 systemd[1]: Started session-2.scope.
Oct 31 00:52:25.145782 systemd-logind[1304]: New session 2 of user core.
Oct 31 00:52:25.200761 sshd[1417]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:25.203073 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:55316.service.
Oct 31 00:52:25.203669 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:55300.service: Deactivated successfully.
Oct 31 00:52:25.204471 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 00:52:25.205387 systemd-logind[1304]: Session 2 logged out. Waiting for processes to exit.
Oct 31 00:52:25.206332 systemd-logind[1304]: Removed session 2.
Oct 31 00:52:25.249268 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 55316 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:52:25.249066 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:25.254797 systemd[1]: Started session-3.scope.
Oct 31 00:52:25.254969 systemd-logind[1304]: New session 3 of user core.
Oct 31 00:52:25.313855 sshd[1422]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:25.316066 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:55330.service.
Oct 31 00:52:25.316761 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:55316.service: Deactivated successfully.
Oct 31 00:52:25.317604 systemd-logind[1304]: Session 3 logged out. Waiting for processes to exit.
Oct 31 00:52:25.317625 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 00:52:25.318436 systemd-logind[1304]: Removed session 3.
Oct 31 00:52:25.360810 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:52:25.362183 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:25.365873 systemd-logind[1304]: New session 4 of user core.
Oct 31 00:52:25.366696 systemd[1]: Started session-4.scope.
Oct 31 00:52:25.421832 sshd[1429]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:25.425020 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:55330.service: Deactivated successfully.
Oct 31 00:52:25.425899 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit.
Oct 31 00:52:25.427015 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:55340.service.
Oct 31 00:52:25.427491 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 00:52:25.427992 systemd-logind[1304]: Removed session 4.
Oct 31 00:52:25.471963 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 55340 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:52:25.473371 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:52:25.477465 systemd-logind[1304]: New session 5 of user core.
Oct 31 00:52:25.477903 systemd[1]: Started session-5.scope.
Oct 31 00:52:25.546193 sudo[1442]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 00:52:25.546432 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 00:52:25.588374 systemd[1]: Starting docker.service...
Oct 31 00:52:25.665125 env[1454]: time="2025-10-31T00:52:25.665063745Z" level=info msg="Starting up"
Oct 31 00:52:25.666759 env[1454]: time="2025-10-31T00:52:25.666725985Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 31 00:52:25.666850 env[1454]: time="2025-10-31T00:52:25.666836465Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 31 00:52:25.666932 env[1454]: time="2025-10-31T00:52:25.666916745Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 31 00:52:25.666987 env[1454]: time="2025-10-31T00:52:25.666974905Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 31 00:52:25.668830 env[1454]: time="2025-10-31T00:52:25.668806545Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 31 00:52:25.668830 env[1454]: time="2025-10-31T00:52:25.668827785Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 31 00:52:25.668934 env[1454]: time="2025-10-31T00:52:25.668842585Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 31 00:52:25.668934 env[1454]: time="2025-10-31T00:52:25.668853025Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 31 00:52:25.865900 env[1454]: time="2025-10-31T00:52:25.865379585Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 31 00:52:25.866063 env[1454]: time="2025-10-31T00:52:25.866046585Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 31 00:52:25.866304 env[1454]: time="2025-10-31T00:52:25.866284945Z" level=info msg="Loading containers: start."
Oct 31 00:52:26.100188 kernel: Initializing XFRM netlink socket
Oct 31 00:52:26.123753 env[1454]: time="2025-10-31T00:52:26.123529385Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 31 00:52:26.179029 systemd-networkd[1097]: docker0: Link UP
Oct 31 00:52:26.200881 env[1454]: time="2025-10-31T00:52:26.200846065Z" level=info msg="Loading containers: done."
Oct 31 00:52:26.222924 env[1454]: time="2025-10-31T00:52:26.222861585Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 00:52:26.223167 env[1454]: time="2025-10-31T00:52:26.223047105Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Oct 31 00:52:26.223167 env[1454]: time="2025-10-31T00:52:26.223138905Z" level=info msg="Daemon has completed initialization"
Oct 31 00:52:26.237511 systemd[1]: Started docker.service.
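The docker daemon noted above that the default bridge (docker0) takes 172.17.0.0/16 and that `--bip` can override it. The same setting can be made persistent via the daemon configuration file; a sketch (written to a temporary file here rather than /etc/docker/daemon.json, and the 192.168.200.1/24 address is an illustrative assumption chosen to avoid the default range):

```shell
# Hypothetical example of pinning the docker0 bridge address the log mentions.
# Real deployments write /etc/docker/daemon.json and then restart docker.service.
DOCKER_CFG="$(mktemp)"
cat > "$DOCKER_CFG" <<'EOF'
{
  "bip": "192.168.200.1/24"
}
EOF
echo "wrote $DOCKER_CFG"
```

Pinning `bip` is mainly useful when 172.17.0.0/16 collides with an existing network, which is not indicated by this log; the daemon here proceeded with the default.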
Oct 31 00:52:26.246529 env[1454]: time="2025-10-31T00:52:26.246409025Z" level=info msg="API listen on /run/docker.sock"
Oct 31 00:52:26.886508 env[1321]: time="2025-10-31T00:52:26.886450585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 31 00:52:27.482068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874674121.mount: Deactivated successfully.
Oct 31 00:52:28.725290 env[1321]: time="2025-10-31T00:52:28.725239985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:28.726885 env[1321]: time="2025-10-31T00:52:28.726855505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:28.729007 env[1321]: time="2025-10-31T00:52:28.728978065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:28.731132 env[1321]: time="2025-10-31T00:52:28.731091385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:28.732090 env[1321]: time="2025-10-31T00:52:28.732062665Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Oct 31 00:52:28.732739 env[1321]: time="2025-10-31T00:52:28.732678345Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 31 00:52:30.142046 env[1321]: time="2025-10-31T00:52:30.141982345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:30.143537 env[1321]: time="2025-10-31T00:52:30.143500665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:30.145226 env[1321]: time="2025-10-31T00:52:30.145197545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:30.147232 env[1321]: time="2025-10-31T00:52:30.147200265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:30.148046 env[1321]: time="2025-10-31T00:52:30.148021865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Oct 31 00:52:30.148524 env[1321]: time="2025-10-31T00:52:30.148485225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 31 00:52:31.447497 env[1321]: time="2025-10-31T00:52:31.447432865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:31.449038 env[1321]: time="2025-10-31T00:52:31.449006905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:31.451502 env[1321]: time="2025-10-31T00:52:31.450567225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:31.452665 env[1321]: time="2025-10-31T00:52:31.452639545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:31.453558 env[1321]: time="2025-10-31T00:52:31.453506985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Oct 31 00:52:31.454379 env[1321]: time="2025-10-31T00:52:31.454352625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 31 00:52:31.941944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:52:31.942114 systemd[1]: Stopped kubelet.service.
Oct 31 00:52:31.943645 systemd[1]: Starting kubelet.service...
Oct 31 00:52:32.035838 systemd[1]: Started kubelet.service.
Oct 31 00:52:32.080648 kubelet[1593]: E1031 00:52:32.080592 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:52:32.083024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:52:32.083201 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:52:32.752493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554709793.mount: Deactivated successfully.
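"kubelet.service: Scheduled restart job, restart counter is at 1" above is systemd's `Restart=` logic relaunching the failed unit after its `RestartSec=` delay. A sketch of the drop-in shape that governs this behavior (the directory is a temp path here; on a real host it would be /etc/systemd/system/kubelet.service.d/ followed by `systemctl daemon-reload`, and the exact `Restart=`/`RestartSec=` values shipped on this node are not shown in the log):

```shell
# Hypothetical systemd drop-in illustrating the settings behind the
# "Scheduled restart job" messages. Path and values are illustrative.
DROPIN_DIR="$(mktemp -d)"
cat > "$DROPIN_DIR/10-restart.conf" <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
echo "wrote $DROPIN_DIR/10-restart.conf"
```

With `Restart=always`, systemd keeps retrying even after the `status=1/FAILURE` exits seen here, which is the desired behavior for a kubelet waiting on its config file.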
Oct 31 00:52:33.349724 env[1321]: time="2025-10-31T00:52:33.349629625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:33.351253 env[1321]: time="2025-10-31T00:52:33.351217025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:33.352786 env[1321]: time="2025-10-31T00:52:33.352763105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:33.354346 env[1321]: time="2025-10-31T00:52:33.354319585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:33.354834 env[1321]: time="2025-10-31T00:52:33.354808225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Oct 31 00:52:33.355342 env[1321]: time="2025-10-31T00:52:33.355315465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 31 00:52:33.921976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690549106.mount: Deactivated successfully.
Oct 31 00:52:34.903017 env[1321]: time="2025-10-31T00:52:34.902928825Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:34.904551 env[1321]: time="2025-10-31T00:52:34.904503905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:34.906878 env[1321]: time="2025-10-31T00:52:34.906826745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:34.909082 env[1321]: time="2025-10-31T00:52:34.908984505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:34.910124 env[1321]: time="2025-10-31T00:52:34.910090865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Oct 31 00:52:34.911560 env[1321]: time="2025-10-31T00:52:34.911524145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 31 00:52:35.394471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076101078.mount: Deactivated successfully.
Oct 31 00:52:35.399941 env[1321]: time="2025-10-31T00:52:35.399898465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:35.402033 env[1321]: time="2025-10-31T00:52:35.401993345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:35.404190 env[1321]: time="2025-10-31T00:52:35.404161425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:35.405748 env[1321]: time="2025-10-31T00:52:35.405702185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:35.406373 env[1321]: time="2025-10-31T00:52:35.406342465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Oct 31 00:52:35.407322 env[1321]: time="2025-10-31T00:52:35.407283305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 31 00:52:35.917015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305098603.mount: Deactivated successfully.
Oct 31 00:52:38.156430 env[1321]: time="2025-10-31T00:52:38.156375225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:38.159123 env[1321]: time="2025-10-31T00:52:38.159074385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:38.161466 env[1321]: time="2025-10-31T00:52:38.161432545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:38.164883 env[1321]: time="2025-10-31T00:52:38.164841225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:38.166167 env[1321]: time="2025-10-31T00:52:38.166119345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Oct 31 00:52:42.192234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 31 00:52:42.192414 systemd[1]: Stopped kubelet.service.
Oct 31 00:52:42.194440 systemd[1]: Starting kubelet.service...
Oct 31 00:52:42.288674 systemd[1]: Started kubelet.service.
Oct 31 00:52:42.321602 kubelet[1630]: E1031 00:52:42.321562 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:52:42.323498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:52:42.323638 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:52:43.056809 systemd[1]: Stopped kubelet.service.
Oct 31 00:52:43.058863 systemd[1]: Starting kubelet.service...
Oct 31 00:52:43.081574 systemd[1]: Reloading.
Oct 31 00:52:43.135194 /usr/lib/systemd/system-generators/torcx-generator[1667]: time="2025-10-31T00:52:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Oct 31 00:52:43.135224 /usr/lib/systemd/system-generators/torcx-generator[1667]: time="2025-10-31T00:52:43Z" level=info msg="torcx already run"
Oct 31 00:52:43.309792 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 00:52:43.310091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 00:52:43.326277 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:52:43.392641 systemd[1]: Started kubelet.service.
Oct 31 00:52:43.394283 systemd[1]: Stopping kubelet.service...
Oct 31 00:52:43.394721 systemd[1]: kubelet.service: Deactivated successfully.
Oct 31 00:52:43.394976 systemd[1]: Stopped kubelet.service.
Oct 31 00:52:43.396941 systemd[1]: Starting kubelet.service...
Oct 31 00:52:43.492893 systemd[1]: Started kubelet.service.
Oct 31 00:52:43.536456 kubelet[1726]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:52:43.536804 kubelet[1726]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 00:52:43.536852 kubelet[1726]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:52:43.537005 kubelet[1726]: I1031 00:52:43.536966 1726 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 00:52:44.469485 kubelet[1726]: I1031 00:52:44.469431 1726 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 31 00:52:44.469485 kubelet[1726]: I1031 00:52:44.469470 1726 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 00:52:44.469756 kubelet[1726]: I1031 00:52:44.469738 1726 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 31 00:52:44.490956 kubelet[1726]: E1031 00:52:44.490912 1726 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:44.492760 kubelet[1726]: I1031 00:52:44.492707 1726 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 00:52:44.499198 kubelet[1726]: E1031 00:52:44.499129 1726 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 00:52:44.499318 kubelet[1726]: I1031 00:52:44.499301 1726 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 31 00:52:44.502237 kubelet[1726]: I1031 00:52:44.502213 1726 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 31 00:52:44.503188 kubelet[1726]: I1031 00:52:44.503136 1726 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 00:52:44.503371 kubelet[1726]: I1031 00:52:44.503189 1726 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Oct 31 00:52:44.503460 kubelet[1726]: I1031 00:52:44.503437 1726 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 00:52:44.503460 kubelet[1726]: I1031 00:52:44.503448 1726 container_manager_linux.go:304] "Creating device plugin manager"
Oct 31 00:52:44.503683 kubelet[1726]: I1031 00:52:44.503653 1726 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:52:44.508100 kubelet[1726]: I1031 00:52:44.508043 1726 kubelet.go:446] "Attempting to sync node with API server"
Oct 31 00:52:44.508100 kubelet[1726]: I1031 00:52:44.508084 1726 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 00:52:44.508201 kubelet[1726]: I1031 00:52:44.508105 1726 kubelet.go:352] "Adding apiserver pod source"
Oct 31 00:52:44.508201 kubelet[1726]: I1031 00:52:44.508117 1726 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 00:52:44.520957 kubelet[1726]: I1031 00:52:44.520926 1726 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Oct 31 00:52:44.528790 kubelet[1726]: W1031 00:52:44.528733 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Oct 31 00:52:44.528959 kubelet[1726]: E1031 00:52:44.528936 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:44.529309 kubelet[1726]: I1031 00:52:44.529288 1726 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 31 00:52:44.529459 kubelet[1726]: W1031 00:52:44.529446 1726 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 31 00:52:44.530410 kubelet[1726]: W1031 00:52:44.530374 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Oct 31 00:52:44.530518 kubelet[1726]: E1031 00:52:44.530498 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:44.531263 kubelet[1726]: I1031 00:52:44.531243 1726 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 31 00:52:44.531326 kubelet[1726]: I1031 00:52:44.531288 1726 server.go:1287] "Started kubelet"
Oct 31 00:52:44.531378 kubelet[1726]: I1031 00:52:44.531352 1726 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:52:44.532828 kubelet[1726]: I1031 00:52:44.532395 1726 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 00:52:44.532828 kubelet[1726]: I1031 00:52:44.532746 1726 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 00:52:44.532828 kubelet[1726]: I1031 00:52:44.532818 1726 server.go:479] "Adding debug handlers to kubelet server"
Oct 31 00:52:44.536294 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Oct 31 00:52:44.536477 kubelet[1726]: I1031 00:52:44.536453 1726 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:52:44.536832 kubelet[1726]: E1031 00:52:44.536807 1726 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 31 00:52:44.537233 kubelet[1726]: I1031 00:52:44.537209 1726 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 00:52:44.537736 kubelet[1726]: I1031 00:52:44.537705 1726 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 31 00:52:44.537976 kubelet[1726]: E1031 00:52:44.537952 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:52:44.538321 kubelet[1726]: I1031 00:52:44.538296 1726 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 31 00:52:44.538404 kubelet[1726]: I1031 00:52:44.538390 1726 reconciler.go:26] "Reconciler: start to sync state"
Oct 31 00:52:44.538965 kubelet[1726]: W1031 00:52:44.538905 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Oct 31 00:52:44.538965 kubelet[1726]: E1031 00:52:44.538957 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:44.539143 kubelet[1726]: E1031 00:52:44.539029 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms"
Oct 31 00:52:44.539310 kubelet[1726]: I1031 00:52:44.539287 1726 factory.go:221] Registration of the systemd container factory successfully
Oct 31 00:52:44.539426 kubelet[1726]: I1031 00:52:44.539398 1726 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 00:52:44.540566 kubelet[1726]: E1031 00:52:44.540272 1726 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736d37899afe61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:52:44.531261025 +0000 UTC m=+1.029417881,LastTimestamp:2025-10-31 00:52:44.531261025 +0000 UTC m=+1.029417881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 31 00:52:44.540886 kubelet[1726]: I1031 00:52:44.540866 1726 factory.go:221] Registration of the containerd container factory successfully
Oct 31 00:52:44.556919 kubelet[1726]: I1031 00:52:44.556869 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 31 00:52:44.558091 kubelet[1726]: I1031 00:52:44.557906 1726 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 31 00:52:44.558091 kubelet[1726]: I1031 00:52:44.557929 1726 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 31 00:52:44.558091 kubelet[1726]: I1031 00:52:44.557949 1726 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:52:44.558091 kubelet[1726]: I1031 00:52:44.557959 1726 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 31 00:52:44.558091 kubelet[1726]: E1031 00:52:44.558002 1726 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 00:52:44.558641 kubelet[1726]: I1031 00:52:44.558620 1726 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 00:52:44.558730 kubelet[1726]: I1031 00:52:44.558717 1726 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 00:52:44.558801 kubelet[1726]: I1031 00:52:44.558791 1726 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:52:44.559820 kubelet[1726]: W1031 00:52:44.559787 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Oct 31 00:52:44.559937 kubelet[1726]: E1031 00:52:44.559916 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:44.638603 kubelet[1726]: E1031 00:52:44.638570 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:52:44.658805 kubelet[1726]: E1031 00:52:44.658762 1726 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 31 00:52:44.740178 kubelet[1726]: E1031 00:52:44.738983 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:52:44.741014 kubelet[1726]: E1031 00:52:44.740739 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms"
Oct 31 00:52:44.832515 kubelet[1726]: I1031 00:52:44.832460 1726 policy_none.go:49] "None policy: Start"
Oct 31 00:52:44.832515 kubelet[1726]: I1031 00:52:44.832496 1726 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 31 00:52:44.832515 kubelet[1726]: I1031 00:52:44.832509 1726 state_mem.go:35] "Initializing new in-memory state store"
Oct 31 00:52:44.838810 kubelet[1726]: I1031 00:52:44.838771 1726 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 31 00:52:44.838961 kubelet[1726]: I1031 00:52:44.838944 1726 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 00:52:44.838998 kubelet[1726]: I1031 00:52:44.838959 1726 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 00:52:44.839167 kubelet[1726]: E1031 00:52:44.839124 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:52:44.839572 kubelet[1726]: I1031 00:52:44.839358 1726 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 00:52:44.840188 kubelet[1726]: E1031 00:52:44.840167 1726 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 00:52:44.840376 kubelet[1726]: E1031 00:52:44.840360 1726 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 31 00:52:44.866072 kubelet[1726]: E1031 00:52:44.866024 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:52:44.866563 kubelet[1726]: E1031 00:52:44.866529 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:52:44.868013 kubelet[1726]: E1031 00:52:44.867983 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:52:44.939777 kubelet[1726]: I1031 00:52:44.939728 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:52:44.939777 kubelet[1726]: I1031 00:52:44.939766 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:52:44.939777 kubelet[1726]: I1031 00:52:44.939789 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:52:44.939986 kubelet[1726]: I1031 00:52:44.939805 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost"
Oct 31 00:52:44.939986 kubelet[1726]: I1031 00:52:44.939852 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:52:44.939986 kubelet[1726]: I1031 00:52:44.939889 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:52:44.939986 kubelet[1726]: I1031 00:52:44.939911 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:52:44.939986 kubelet[1726]: I1031 00:52:44.939936 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:52:44.940119 kubelet[1726]: I1031 00:52:44.939968 1726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:52:44.941666 kubelet[1726]: I1031 00:52:44.941643 1726 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:52:44.942231 kubelet[1726]: E1031 00:52:44.942203 1726 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost"
Oct 31 00:52:45.141965 kubelet[1726]: E1031 00:52:45.141826 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms"
Oct 31 00:52:45.144011 kubelet[1726]: I1031 00:52:45.143973 1726 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:52:45.144514 kubelet[1726]: E1031 00:52:45.144461 1726 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost"
Oct 31 00:52:45.166748 kubelet[1726]: E1031 00:52:45.166715 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:45.167315 kubelet[1726]: E1031 00:52:45.167287 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:45.170552 kubelet[1726]: E1031 00:52:45.170370 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:52:45.171561 env[1321]: time="2025-10-31T00:52:45.171285985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a4e8400b2499f6ba1f36d7aeed3c36c,Namespace:kube-system,Attempt:0,}"
Oct 31 00:52:45.171901 env[1321]: time="2025-10-31T00:52:45.171707425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}"
Oct 31 00:52:45.171901 env[1321]: time="2025-10-31T00:52:45.171858225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}"
Oct 31 00:52:45.366518 kubelet[1726]: W1031 00:52:45.366378 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused
Oct 31 00:52:45.366518 kubelet[1726]: E1031 00:52:45.366449 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:52:45.546278 kubelet[1726]: I1031 00:52:45.546247 1726 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:52:45.548500 kubelet[1726]: E1031 00:52:45.548462 1726 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost"
Oct 31 00:52:45.691906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433557505.mount: Deactivated successfully.
Oct 31 00:52:45.697129 env[1321]: time="2025-10-31T00:52:45.696321625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.700359 env[1321]: time="2025-10-31T00:52:45.700262585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.702795 env[1321]: time="2025-10-31T00:52:45.702690425Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.705265 env[1321]: time="2025-10-31T00:52:45.705227625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.707373 env[1321]: time="2025-10-31T00:52:45.707309745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.708514 env[1321]: time="2025-10-31T00:52:45.708452265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.712177 env[1321]: time="2025-10-31T00:52:45.712065785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.715651 env[1321]: time="2025-10-31T00:52:45.715541665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.719481 env[1321]: time="2025-10-31T00:52:45.719444785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.720343 env[1321]: time="2025-10-31T00:52:45.720315745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.721120 env[1321]: time="2025-10-31T00:52:45.721094385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.724615 env[1321]: time="2025-10-31T00:52:45.722014385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:52:45.759711 env[1321]: time="2025-10-31T00:52:45.758927865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:52:45.759711 env[1321]: time="2025-10-31T00:52:45.758969145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:52:45.759711 env[1321]: time="2025-10-31T00:52:45.758982625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:52:45.759888 env[1321]: time="2025-10-31T00:52:45.759652865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1704a10f3b018259fd3b88950f05139442305162ada022fb4a840331cbb5b2be pid=1769 runtime=io.containerd.runc.v2
Oct 31 00:52:45.762272 env[1321]: time="2025-10-31T00:52:45.761562305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:52:45.762272 env[1321]: time="2025-10-31T00:52:45.761597385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:52:45.762272 env[1321]: time="2025-10-31T00:52:45.761616625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:52:45.762272 env[1321]: time="2025-10-31T00:52:45.761792425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/771df31e79f1535b53fe7b6945f03a07b7ced30eb346a9f5405e237da409dc56 pid=1788 runtime=io.containerd.runc.v2
Oct 31 00:52:45.766871 env[1321]: time="2025-10-31T00:52:45.766771265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:52:45.766871 env[1321]: time="2025-10-31T00:52:45.766874465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:52:45.766871 env[1321]: time="2025-10-31T00:52:45.766902905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:45.766871 env[1321]: time="2025-10-31T00:52:45.767222145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c3311dcaf6ae72850e82d98494c7bb04f774382e163f51f96bef006f6263f8c pid=1804 runtime=io.containerd.runc.v2 Oct 31 00:52:45.837918 env[1321]: time="2025-10-31T00:52:45.837804785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a4e8400b2499f6ba1f36d7aeed3c36c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1704a10f3b018259fd3b88950f05139442305162ada022fb4a840331cbb5b2be\"" Oct 31 00:52:45.838847 kubelet[1726]: E1031 00:52:45.838810 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:45.841821 env[1321]: time="2025-10-31T00:52:45.841781265Z" level=info msg="CreateContainer within sandbox \"1704a10f3b018259fd3b88950f05139442305162ada022fb4a840331cbb5b2be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:52:45.842007 env[1321]: time="2025-10-31T00:52:45.841944385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"771df31e79f1535b53fe7b6945f03a07b7ced30eb346a9f5405e237da409dc56\"" Oct 31 00:52:45.843785 env[1321]: time="2025-10-31T00:52:45.843693065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c3311dcaf6ae72850e82d98494c7bb04f774382e163f51f96bef006f6263f8c\"" Oct 31 00:52:45.844409 kubelet[1726]: E1031 00:52:45.844388 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:45.844880 kubelet[1726]: E1031 00:52:45.844723 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:45.846353 env[1321]: time="2025-10-31T00:52:45.846312745Z" level=info msg="CreateContainer within sandbox \"771df31e79f1535b53fe7b6945f03a07b7ced30eb346a9f5405e237da409dc56\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:52:45.846774 env[1321]: time="2025-10-31T00:52:45.846745025Z" level=info msg="CreateContainer within sandbox \"0c3311dcaf6ae72850e82d98494c7bb04f774382e163f51f96bef006f6263f8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:52:45.865210 kubelet[1726]: W1031 00:52:45.865089 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Oct 31 00:52:45.865210 kubelet[1726]: E1031 00:52:45.865172 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:52:45.866169 env[1321]: time="2025-10-31T00:52:45.866111225Z" level=info msg="CreateContainer within sandbox \"1704a10f3b018259fd3b88950f05139442305162ada022fb4a840331cbb5b2be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68d8178ce95ec17c7f1d5c8677af6e95cbc71e7837c9ba2c4ec7ac32704e7f6f\"" Oct 31 00:52:45.867002 env[1321]: time="2025-10-31T00:52:45.866970265Z" level=info msg="StartContainer for 
\"68d8178ce95ec17c7f1d5c8677af6e95cbc71e7837c9ba2c4ec7ac32704e7f6f\"" Oct 31 00:52:45.877991 env[1321]: time="2025-10-31T00:52:45.877936105Z" level=info msg="CreateContainer within sandbox \"771df31e79f1535b53fe7b6945f03a07b7ced30eb346a9f5405e237da409dc56\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37f1549190abca449cd4467178b8afcffb84014b7ff4562f1ecef46ecd9a951e\"" Oct 31 00:52:45.878510 env[1321]: time="2025-10-31T00:52:45.878478465Z" level=info msg="StartContainer for \"37f1549190abca449cd4467178b8afcffb84014b7ff4562f1ecef46ecd9a951e\"" Oct 31 00:52:45.896793 kubelet[1726]: W1031 00:52:45.896689 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Oct 31 00:52:45.896793 kubelet[1726]: E1031 00:52:45.896755 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:52:45.898328 env[1321]: time="2025-10-31T00:52:45.898283425Z" level=info msg="CreateContainer within sandbox \"0c3311dcaf6ae72850e82d98494c7bb04f774382e163f51f96bef006f6263f8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6a3cf66574116fc52094aa1f494352a68acbe484e16ec17447997d7aa1d17ff\"" Oct 31 00:52:45.899029 env[1321]: time="2025-10-31T00:52:45.898997865Z" level=info msg="StartContainer for \"c6a3cf66574116fc52094aa1f494352a68acbe484e16ec17447997d7aa1d17ff\"" Oct 31 00:52:45.941801 env[1321]: time="2025-10-31T00:52:45.941538385Z" level=info msg="StartContainer for \"68d8178ce95ec17c7f1d5c8677af6e95cbc71e7837c9ba2c4ec7ac32704e7f6f\" returns successfully" Oct 31 
00:52:45.942329 kubelet[1726]: E1031 00:52:45.942287 1726 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" Oct 31 00:52:45.953567 env[1321]: time="2025-10-31T00:52:45.953428185Z" level=info msg="StartContainer for \"37f1549190abca449cd4467178b8afcffb84014b7ff4562f1ecef46ecd9a951e\" returns successfully" Oct 31 00:52:45.957779 kubelet[1726]: W1031 00:52:45.957718 1726 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused Oct 31 00:52:45.957923 kubelet[1726]: E1031 00:52:45.957788 1726 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:52:45.982429 env[1321]: time="2025-10-31T00:52:45.982364705Z" level=info msg="StartContainer for \"c6a3cf66574116fc52094aa1f494352a68acbe484e16ec17447997d7aa1d17ff\" returns successfully" Oct 31 00:52:46.350270 kubelet[1726]: I1031 00:52:46.350238 1726 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:52:46.564545 kubelet[1726]: E1031 00:52:46.564513 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:46.564843 kubelet[1726]: E1031 00:52:46.564665 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:46.567329 kubelet[1726]: E1031 00:52:46.567297 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:46.567441 kubelet[1726]: E1031 00:52:46.567422 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:46.570064 kubelet[1726]: E1031 00:52:46.570025 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:46.570528 kubelet[1726]: E1031 00:52:46.570442 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571391 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571475 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571541 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571563 1726 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571611 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:47.571706 kubelet[1726]: E1031 00:52:47.571662 1726 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:47.759784 kubelet[1726]: E1031 00:52:47.759745 1726 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 00:52:47.849331 kubelet[1726]: I1031 00:52:47.849212 1726 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:52:47.849518 kubelet[1726]: E1031 00:52:47.849500 1726 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:52:47.880307 kubelet[1726]: E1031 00:52:47.880250 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:47.980822 kubelet[1726]: E1031 00:52:47.980784 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:48.081865 kubelet[1726]: E1031 00:52:48.081821 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:48.182492 kubelet[1726]: E1031 00:52:48.182375 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:48.282983 kubelet[1726]: E1031 00:52:48.282931 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:48.383810 kubelet[1726]: E1031 00:52:48.383757 1726 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:48.484687 kubelet[1726]: E1031 00:52:48.484639 1726 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Oct 31 00:52:48.510520 kubelet[1726]: I1031 00:52:48.510487 1726 apiserver.go:52] "Watching apiserver" Oct 31 00:52:48.538653 kubelet[1726]: I1031 00:52:48.538609 1726 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:52:48.538653 kubelet[1726]: I1031 00:52:48.538616 1726 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:48.546626 kubelet[1726]: E1031 00:52:48.546592 1726 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:48.546626 kubelet[1726]: I1031 00:52:48.546622 1726 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:52:48.548625 kubelet[1726]: E1031 00:52:48.548596 1726 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 00:52:48.548718 kubelet[1726]: I1031 00:52:48.548706 1726 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:48.550510 kubelet[1726]: E1031 00:52:48.550483 1726 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:49.938147 systemd[1]: Reloading. 
Oct 31 00:52:49.994984 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-10-31T00:52:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 00:52:49.995013 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-10-31T00:52:49Z" level=info msg="torcx already run" Oct 31 00:52:50.055866 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 00:52:50.055889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 00:52:50.072690 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:52:50.147501 systemd[1]: Stopping kubelet.service... Oct 31 00:52:50.172533 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:52:50.172838 systemd[1]: Stopped kubelet.service. Oct 31 00:52:50.174929 systemd[1]: Starting kubelet.service... Oct 31 00:52:50.267535 systemd[1]: Started kubelet.service. Oct 31 00:52:50.304267 kubelet[2081]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:52:50.304631 kubelet[2081]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Oct 31 00:52:50.304678 kubelet[2081]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:52:50.304817 kubelet[2081]: I1031 00:52:50.304784 2081 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:52:50.311769 kubelet[2081]: I1031 00:52:50.311724 2081 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:52:50.311769 kubelet[2081]: I1031 00:52:50.311754 2081 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:52:50.312191 kubelet[2081]: I1031 00:52:50.312171 2081 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:52:50.314122 kubelet[2081]: I1031 00:52:50.314095 2081 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 00:52:50.316939 kubelet[2081]: I1031 00:52:50.316894 2081 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:52:50.320480 kubelet[2081]: E1031 00:52:50.320451 2081 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:52:50.320605 kubelet[2081]: I1031 00:52:50.320589 2081 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:52:50.324245 kubelet[2081]: I1031 00:52:50.324192 2081 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 00:52:50.324890 kubelet[2081]: I1031 00:52:50.324854 2081 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:52:50.325279 kubelet[2081]: I1031 00:52:50.324961 2081 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 00:52:50.325775 kubelet[2081]: I1031 00:52:50.325732 2081 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 00:52:50.325855 kubelet[2081]: I1031 00:52:50.325844 2081 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:52:50.326052 kubelet[2081]: I1031 00:52:50.326036 2081 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:52:50.326287 kubelet[2081]: I1031 00:52:50.326272 2081 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:52:50.326387 kubelet[2081]: I1031 00:52:50.326373 2081 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:52:50.326459 kubelet[2081]: I1031 00:52:50.326449 2081 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:52:50.326516 kubelet[2081]: I1031 00:52:50.326506 2081 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:52:50.336065 kubelet[2081]: I1031 00:52:50.335965 2081 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 00:52:50.336639 kubelet[2081]: I1031 00:52:50.336617 2081 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:52:50.343942 kubelet[2081]: I1031 00:52:50.343912 2081 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:52:50.343942 kubelet[2081]: I1031 00:52:50.343951 2081 server.go:1287] "Started kubelet" Oct 31 00:52:50.345869 kubelet[2081]: I1031 00:52:50.345837 2081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:52:50.346092 kubelet[2081]: I1031 00:52:50.346056 2081 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:52:50.346282 kubelet[2081]: I1031 00:52:50.346255 2081 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:52:50.346393 kubelet[2081]: E1031 00:52:50.346374 2081 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:52:50.346574 kubelet[2081]: I1031 00:52:50.346552 2081 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Oct 31 00:52:50.346694 kubelet[2081]: I1031 00:52:50.346677 2081 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:52:50.347463 kubelet[2081]: I1031 00:52:50.347350 2081 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:52:50.348165 kubelet[2081]: I1031 00:52:50.348104 2081 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:52:50.351228 kubelet[2081]: I1031 00:52:50.351206 2081 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:52:50.351550 kubelet[2081]: I1031 00:52:50.351528 2081 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:52:50.353645 kubelet[2081]: I1031 00:52:50.353602 2081 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:52:50.362109 kubelet[2081]: I1031 00:52:50.361527 2081 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:52:50.362109 kubelet[2081]: I1031 00:52:50.361550 2081 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:52:50.369261 kubelet[2081]: I1031 00:52:50.369219 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 00:52:50.370714 kubelet[2081]: I1031 00:52:50.370685 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:52:50.370714 kubelet[2081]: I1031 00:52:50.370710 2081 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:52:50.370780 kubelet[2081]: I1031 00:52:50.370727 2081 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 00:52:50.370780 kubelet[2081]: I1031 00:52:50.370735 2081 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:52:50.370841 kubelet[2081]: E1031 00:52:50.370776 2081 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:52:50.372704 kubelet[2081]: E1031 00:52:50.372628 2081 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:52:50.409653 kubelet[2081]: I1031 00:52:50.409627 2081 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:52:50.409653 kubelet[2081]: I1031 00:52:50.409645 2081 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:52:50.409653 kubelet[2081]: I1031 00:52:50.409666 2081 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:52:50.409850 kubelet[2081]: I1031 00:52:50.409833 2081 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:52:50.409901 kubelet[2081]: I1031 00:52:50.409849 2081 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:52:50.409901 kubelet[2081]: I1031 00:52:50.409867 2081 policy_none.go:49] "None policy: Start" Oct 31 00:52:50.409901 kubelet[2081]: I1031 00:52:50.409875 2081 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:52:50.409901 kubelet[2081]: I1031 00:52:50.409884 2081 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:52:50.409985 kubelet[2081]: I1031 00:52:50.409978 2081 state_mem.go:75] "Updated machine memory state" Oct 31 00:52:50.411140 kubelet[2081]: I1031 00:52:50.411107 2081 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:52:50.411295 kubelet[2081]: I1031 00:52:50.411280 2081 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:52:50.411339 kubelet[2081]: 
I1031 00:52:50.411297 2081 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:52:50.412183 kubelet[2081]: I1031 00:52:50.412169 2081 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:52:50.413872 kubelet[2081]: E1031 00:52:50.413770 2081 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:52:50.471423 kubelet[2081]: I1031 00:52:50.471362 2081 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:50.471589 kubelet[2081]: I1031 00:52:50.471438 2081 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:52:50.471688 kubelet[2081]: I1031 00:52:50.471652 2081 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.516388 kubelet[2081]: I1031 00:52:50.516345 2081 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:52:50.524859 kubelet[2081]: I1031 00:52:50.524734 2081 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:52:50.524859 kubelet[2081]: I1031 00:52:50.524841 2081 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:52:50.547738 kubelet[2081]: I1031 00:52:50.547679 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.547738 kubelet[2081]: I1031 00:52:50.547719 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.547738 kubelet[2081]: I1031 00:52:50.547748 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:52:50.547933 kubelet[2081]: I1031 00:52:50.547765 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:50.547933 kubelet[2081]: I1031 00:52:50.547783 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:50.547933 kubelet[2081]: I1031 00:52:50.547799 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.547933 kubelet[2081]: I1031 00:52:50.547814 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.547933 kubelet[2081]: I1031 00:52:50.547830 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:52:50.548075 kubelet[2081]: I1031 00:52:50.547845 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a4e8400b2499f6ba1f36d7aeed3c36c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a4e8400b2499f6ba1f36d7aeed3c36c\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:52:50.781077 kubelet[2081]: E1031 00:52:50.780921 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:50.781077 kubelet[2081]: E1031 00:52:50.780981 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:50.781077 kubelet[2081]: E1031 00:52:50.780995 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:50.935822 sudo[2118]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 31 00:52:50.936060 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 31 00:52:51.327504 kubelet[2081]: 
I1031 00:52:51.327430 2081 apiserver.go:52] "Watching apiserver" Oct 31 00:52:51.346898 kubelet[2081]: I1031 00:52:51.346847 2081 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:52:51.368574 sudo[2118]: pam_unix(sudo:session): session closed for user root Oct 31 00:52:51.374508 kubelet[2081]: I1031 00:52:51.373898 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.373852665 podStartE2EDuration="1.373852665s" podCreationTimestamp="2025-10-31 00:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:51.370840985 +0000 UTC m=+1.099049521" watchObservedRunningTime="2025-10-31 00:52:51.373852665 +0000 UTC m=+1.102061201" Oct 31 00:52:51.380143 kubelet[2081]: I1031 00:52:51.380081 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.380067905 podStartE2EDuration="1.380067905s" podCreationTimestamp="2025-10-31 00:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:51.379904025 +0000 UTC m=+1.108112561" watchObservedRunningTime="2025-10-31 00:52:51.380067905 +0000 UTC m=+1.108276441" Oct 31 00:52:51.384692 kubelet[2081]: E1031 00:52:51.384664 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:51.385743 kubelet[2081]: E1031 00:52:51.385414 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:51.385743 kubelet[2081]: E1031 00:52:51.385700 2081 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:51.403560 kubelet[2081]: I1031 00:52:51.400091 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.400074305 podStartE2EDuration="1.400074305s" podCreationTimestamp="2025-10-31 00:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:51.389388785 +0000 UTC m=+1.117597321" watchObservedRunningTime="2025-10-31 00:52:51.400074305 +0000 UTC m=+1.128282841" Oct 31 00:52:52.385726 kubelet[2081]: E1031 00:52:52.385694 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:52.386113 kubelet[2081]: E1031 00:52:52.385782 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:53.387865 kubelet[2081]: E1031 00:52:53.387816 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:54.214768 sudo[1442]: pam_unix(sudo:session): session closed for user root Oct 31 00:52:54.216737 sshd[1438]: pam_unix(sshd:session): session closed for user core Oct 31 00:52:54.219505 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:55340.service: Deactivated successfully. Oct 31 00:52:54.220882 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 00:52:54.221548 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Oct 31 00:52:54.222462 systemd-logind[1304]: Removed session 5. 
Oct 31 00:52:54.436363 kubelet[2081]: E1031 00:52:54.436330 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:56.780141 kubelet[2081]: I1031 00:52:56.780108 2081 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:52:56.780491 env[1321]: time="2025-10-31T00:52:56.780419028Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 00:52:56.780658 kubelet[2081]: I1031 00:52:56.780585 2081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:52:57.701037 kubelet[2081]: I1031 00:52:57.700986 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hostproc\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701037 kubelet[2081]: I1031 00:52:57.701035 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hubble-tls\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701247 kubelet[2081]: I1031 00:52:57.701059 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98683940-e22a-4f1a-a56f-afb6f918d405-xtables-lock\") pod \"kube-proxy-n2f5t\" (UID: \"98683940-e22a-4f1a-a56f-afb6f918d405\") " pod="kube-system/kube-proxy-n2f5t" Oct 31 00:52:57.701247 kubelet[2081]: I1031 00:52:57.701077 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-cgroup\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701247 kubelet[2081]: I1031 00:52:57.701097 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98683940-e22a-4f1a-a56f-afb6f918d405-kube-proxy\") pod \"kube-proxy-n2f5t\" (UID: \"98683940-e22a-4f1a-a56f-afb6f918d405\") " pod="kube-system/kube-proxy-n2f5t" Oct 31 00:52:57.701247 kubelet[2081]: I1031 00:52:57.701114 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-xtables-lock\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701247 kubelet[2081]: I1031 00:52:57.701136 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-config-path\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701363 kubelet[2081]: I1031 00:52:57.701163 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftldr\" (UniqueName: \"kubernetes.io/projected/98683940-e22a-4f1a-a56f-afb6f918d405-kube-api-access-ftldr\") pod \"kube-proxy-n2f5t\" (UID: \"98683940-e22a-4f1a-a56f-afb6f918d405\") " pod="kube-system/kube-proxy-n2f5t" Oct 31 00:52:57.701363 kubelet[2081]: I1031 00:52:57.701187 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-kernel\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701363 kubelet[2081]: I1031 00:52:57.701203 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvkj\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-kube-api-access-lcvkj\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701363 kubelet[2081]: I1031 00:52:57.701218 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98683940-e22a-4f1a-a56f-afb6f918d405-lib-modules\") pod \"kube-proxy-n2f5t\" (UID: \"98683940-e22a-4f1a-a56f-afb6f918d405\") " pod="kube-system/kube-proxy-n2f5t" Oct 31 00:52:57.701363 kubelet[2081]: I1031 00:52:57.701235 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-run\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701251 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-bpf-maps\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701267 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-lib-modules\") pod \"cilium-s8g7g\" (UID: 
\"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701284 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-net\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701299 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cni-path\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701314 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-etc-cni-netd\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.701486 kubelet[2081]: I1031 00:52:57.701331 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-clustermesh-secrets\") pod \"cilium-s8g7g\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " pod="kube-system/cilium-s8g7g" Oct 31 00:52:57.802466 kubelet[2081]: I1031 00:52:57.802428 2081 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 00:52:57.902912 kubelet[2081]: I1031 00:52:57.902316 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djfbs\" (UniqueName: \"kubernetes.io/projected/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-kube-api-access-djfbs\") pod \"cilium-operator-6c4d7847fc-bnp2t\" (UID: \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\") " pod="kube-system/cilium-operator-6c4d7847fc-bnp2t" Oct 31 00:52:57.902912 kubelet[2081]: I1031 00:52:57.902364 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bnp2t\" (UID: \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\") " pod="kube-system/cilium-operator-6c4d7847fc-bnp2t" Oct 31 00:52:57.945255 kubelet[2081]: E1031 00:52:57.945221 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:57.946418 env[1321]: time="2025-10-31T00:52:57.946347662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2f5t,Uid:98683940-e22a-4f1a-a56f-afb6f918d405,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:57.956195 kubelet[2081]: E1031 00:52:57.955769 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:57.957522 env[1321]: time="2025-10-31T00:52:57.957465110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8g7g,Uid:d6153ec7-32f9-44ec-b0dc-0c7fd399491a,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:57.971830 env[1321]: time="2025-10-31T00:52:57.971758441Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:57.971974 env[1321]: time="2025-10-31T00:52:57.971810559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:57.971974 env[1321]: time="2025-10-31T00:52:57.971821599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:57.972057 env[1321]: time="2025-10-31T00:52:57.971986275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de035644eaf2f9a72a09724f23053201d710b46e169181e1787d01eedf324c5a pid=2178 runtime=io.containerd.runc.v2 Oct 31 00:52:57.983683 env[1321]: time="2025-10-31T00:52:57.983609151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:57.983859 env[1321]: time="2025-10-31T00:52:57.983820986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:57.983859 env[1321]: time="2025-10-31T00:52:57.983844425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:57.984145 env[1321]: time="2025-10-31T00:52:57.984106699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a pid=2198 runtime=io.containerd.runc.v2 Oct 31 00:52:58.020813 env[1321]: time="2025-10-31T00:52:58.020742434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2f5t,Uid:98683940-e22a-4f1a-a56f-afb6f918d405,Namespace:kube-system,Attempt:0,} returns sandbox id \"de035644eaf2f9a72a09724f23053201d710b46e169181e1787d01eedf324c5a\"" Oct 31 00:52:58.022148 kubelet[2081]: E1031 00:52:58.021574 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:58.026602 env[1321]: time="2025-10-31T00:52:58.026183109Z" level=info msg="CreateContainer within sandbox \"de035644eaf2f9a72a09724f23053201d710b46e169181e1787d01eedf324c5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:52:58.038993 env[1321]: time="2025-10-31T00:52:58.038950577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8g7g,Uid:d6153ec7-32f9-44ec-b0dc-0c7fd399491a,Namespace:kube-system,Attempt:0,} returns sandbox id \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\"" Oct 31 00:52:58.040165 kubelet[2081]: E1031 00:52:58.039681 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:58.041343 env[1321]: time="2025-10-31T00:52:58.040588419Z" level=info msg="CreateContainer within sandbox \"de035644eaf2f9a72a09724f23053201d710b46e169181e1787d01eedf324c5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"4f98ac6d572349d3aaaadf06526f2e620afa35a4a99ce6758c4abbe1c9832d06\"" Oct 31 00:52:58.041979 env[1321]: time="2025-10-31T00:52:58.041757192Z" level=info msg="StartContainer for \"4f98ac6d572349d3aaaadf06526f2e620afa35a4a99ce6758c4abbe1c9832d06\"" Oct 31 00:52:58.042327 env[1321]: time="2025-10-31T00:52:58.042273821Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 31 00:52:58.093871 env[1321]: time="2025-10-31T00:52:58.093824720Z" level=info msg="StartContainer for \"4f98ac6d572349d3aaaadf06526f2e620afa35a4a99ce6758c4abbe1c9832d06\" returns successfully" Oct 31 00:52:58.171581 kubelet[2081]: E1031 00:52:58.171265 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:58.172138 env[1321]: time="2025-10-31T00:52:58.172106207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bnp2t,Uid:5cdd9499-c2b0-4ffa-ae7d-5b76550b065a,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:58.187655 env[1321]: time="2025-10-31T00:52:58.187556213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:58.187655 env[1321]: time="2025-10-31T00:52:58.187602652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:58.187655 env[1321]: time="2025-10-31T00:52:58.187612812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:58.188109 env[1321]: time="2025-10-31T00:52:58.188074841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8 pid=2302 runtime=io.containerd.runc.v2 Oct 31 00:52:58.239521 env[1321]: time="2025-10-31T00:52:58.239415225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bnp2t,Uid:5cdd9499-c2b0-4ffa-ae7d-5b76550b065a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\"" Oct 31 00:52:58.240888 kubelet[2081]: E1031 00:52:58.240842 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:58.397597 kubelet[2081]: E1031 00:52:58.397569 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:59.794695 kubelet[2081]: E1031 00:52:59.793856 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:52:59.814512 kubelet[2081]: I1031 00:52:59.811250 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n2f5t" podStartSLOduration=2.811229062 podStartE2EDuration="2.811229062s" podCreationTimestamp="2025-10-31 00:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:58.407004466 +0000 UTC m=+8.135213002" watchObservedRunningTime="2025-10-31 00:52:59.811229062 +0000 UTC m=+9.539437598" Oct 31 00:53:00.400885 kubelet[2081]: E1031 00:53:00.400841 
2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:02.545342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051651629.mount: Deactivated successfully. Oct 31 00:53:03.166484 kubelet[2081]: E1031 00:53:03.166450 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:03.406750 kubelet[2081]: E1031 00:53:03.406713 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:04.448038 kubelet[2081]: E1031 00:53:04.448009 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:04.756885 env[1321]: time="2025-10-31T00:53:04.756628109Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:53:04.758212 env[1321]: time="2025-10-31T00:53:04.758181965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:53:04.759770 env[1321]: time="2025-10-31T00:53:04.759734541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 00:53:04.760406 env[1321]: time="2025-10-31T00:53:04.760378371Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 31 00:53:04.764651 env[1321]: time="2025-10-31T00:53:04.764606425Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 31 00:53:04.768599 env[1321]: time="2025-10-31T00:53:04.768539204Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:53:04.779252 env[1321]: time="2025-10-31T00:53:04.779024601Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\"" Oct 31 00:53:04.779667 env[1321]: time="2025-10-31T00:53:04.779635311Z" level=info msg="StartContainer for \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\"" Oct 31 00:53:04.959316 env[1321]: time="2025-10-31T00:53:04.959210319Z" level=info msg="StartContainer for \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\" returns successfully" Oct 31 00:53:05.024036 env[1321]: time="2025-10-31T00:53:05.023686698Z" level=info msg="shim disconnected" id=a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a Oct 31 00:53:05.024036 env[1321]: time="2025-10-31T00:53:05.023730097Z" level=warning msg="cleaning up after shim disconnected" id=a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a namespace=k8s.io Oct 31 00:53:05.024036 env[1321]: time="2025-10-31T00:53:05.023739897Z" level=info msg="cleaning up dead shim" Oct 31 00:53:05.030439 env[1321]: time="2025-10-31T00:53:05.030363401Z" level=warning msg="cleanup warnings 
time=\"2025-10-31T00:53:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Oct 31 00:53:05.408410 update_engine[1306]: I1031 00:53:05.408140 1306 update_attempter.cc:509] Updating boot flags... Oct 31 00:53:05.414480 kubelet[2081]: E1031 00:53:05.414455 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:05.418806 env[1321]: time="2025-10-31T00:53:05.418767658Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:53:05.432196 env[1321]: time="2025-10-31T00:53:05.430875562Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\"" Oct 31 00:53:05.438484 env[1321]: time="2025-10-31T00:53:05.438430172Z" level=info msg="StartContainer for \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\"" Oct 31 00:53:05.494370 env[1321]: time="2025-10-31T00:53:05.494326477Z" level=info msg="StartContainer for \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\" returns successfully" Oct 31 00:53:05.504923 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 00:53:05.505231 systemd[1]: Stopped systemd-sysctl.service. Oct 31 00:53:05.505402 systemd[1]: Stopping systemd-sysctl.service... Oct 31 00:53:05.506928 systemd[1]: Starting systemd-sysctl.service... Oct 31 00:53:05.517644 systemd[1]: Finished systemd-sysctl.service. 
Oct 31 00:53:05.531722 env[1321]: time="2025-10-31T00:53:05.531670252Z" level=info msg="shim disconnected" id=ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9 Oct 31 00:53:05.531954 env[1321]: time="2025-10-31T00:53:05.531937288Z" level=warning msg="cleaning up after shim disconnected" id=ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9 namespace=k8s.io Oct 31 00:53:05.532028 env[1321]: time="2025-10-31T00:53:05.532013607Z" level=info msg="cleaning up dead shim" Oct 31 00:53:05.538665 env[1321]: time="2025-10-31T00:53:05.538623111Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:53:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2587 runtime=io.containerd.runc.v2\n" Oct 31 00:53:05.778094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a-rootfs.mount: Deactivated successfully. Oct 31 00:53:06.085307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580705975.mount: Deactivated successfully. Oct 31 00:53:06.417959 kubelet[2081]: E1031 00:53:06.417740 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:53:06.420190 env[1321]: time="2025-10-31T00:53:06.420099521Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 31 00:53:06.444210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816599280.mount: Deactivated successfully. 
Oct 31 00:53:06.447634 env[1321]: time="2025-10-31T00:53:06.447589066Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\""
Oct 31 00:53:06.450160 env[1321]: time="2025-10-31T00:53:06.449758836Z" level=info msg="StartContainer for \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\""
Oct 31 00:53:06.503206 env[1321]: time="2025-10-31T00:53:06.503157746Z" level=info msg="StartContainer for \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\" returns successfully"
Oct 31 00:53:06.543793 env[1321]: time="2025-10-31T00:53:06.543747311Z" level=info msg="shim disconnected" id=2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5
Oct 31 00:53:06.543793 env[1321]: time="2025-10-31T00:53:06.543795431Z" level=warning msg="cleaning up after shim disconnected" id=2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5 namespace=k8s.io
Oct 31 00:53:06.544004 env[1321]: time="2025-10-31T00:53:06.543807031Z" level=info msg="cleaning up dead shim"
Oct 31 00:53:06.553465 env[1321]: time="2025-10-31T00:53:06.553423539Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:53:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2644 runtime=io.containerd.runc.v2\n"
Oct 31 00:53:06.904406 env[1321]: time="2025-10-31T00:53:06.904348343Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:53:06.905912 env[1321]: time="2025-10-31T00:53:06.905886442Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:53:06.907255 env[1321]: time="2025-10-31T00:53:06.907217264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 00:53:06.907772 env[1321]: time="2025-10-31T00:53:06.907727297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Oct 31 00:53:06.911800 env[1321]: time="2025-10-31T00:53:06.911354807Z" level=info msg="CreateContainer within sandbox \"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Oct 31 00:53:06.925086 env[1321]: time="2025-10-31T00:53:06.925039740Z" level=info msg="CreateContainer within sandbox \"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\""
Oct 31 00:53:06.926786 env[1321]: time="2025-10-31T00:53:06.925847489Z" level=info msg="StartContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\""
Oct 31 00:53:06.983626 env[1321]: time="2025-10-31T00:53:06.983550460Z" level=info msg="StartContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" returns successfully"
Oct 31 00:53:07.420589 kubelet[2081]: E1031 00:53:07.420546 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:07.424091 kubelet[2081]: E1031 00:53:07.424042 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:07.426222 env[1321]: time="2025-10-31T00:53:07.426172614Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 31 00:53:07.444027 env[1321]: time="2025-10-31T00:53:07.443954946Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\""
Oct 31 00:53:07.445325 env[1321]: time="2025-10-31T00:53:07.445287009Z" level=info msg="StartContainer for \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\""
Oct 31 00:53:07.462394 kubelet[2081]: I1031 00:53:07.461267 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bnp2t" podStartSLOduration=1.793701862 podStartE2EDuration="10.461246884s" podCreationTimestamp="2025-10-31 00:52:57 +0000 UTC" firstStartedPulling="2025-10-31 00:52:58.241499977 +0000 UTC m=+7.969708513" lastFinishedPulling="2025-10-31 00:53:06.909044999 +0000 UTC m=+16.637253535" observedRunningTime="2025-10-31 00:53:07.438468816 +0000 UTC m=+17.166677312" watchObservedRunningTime="2025-10-31 00:53:07.461246884 +0000 UTC m=+17.189455460"
Oct 31 00:53:07.533041 env[1321]: time="2025-10-31T00:53:07.532979045Z" level=info msg="StartContainer for \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\" returns successfully"
Oct 31 00:53:07.631695 env[1321]: time="2025-10-31T00:53:07.631650061Z" level=info msg="shim disconnected" id=83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2
Oct 31 00:53:07.631929 env[1321]: time="2025-10-31T00:53:07.631911537Z" level=warning msg="cleaning up after shim disconnected" id=83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2 namespace=k8s.io
Oct 31 00:53:07.631990 env[1321]: time="2025-10-31T00:53:07.631978097Z" level=info msg="cleaning up dead shim"
Oct 31 00:53:07.639580 env[1321]: time="2025-10-31T00:53:07.639530800Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:53:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2737 runtime=io.containerd.runc.v2\n"
Oct 31 00:53:08.427890 kubelet[2081]: E1031 00:53:08.427847 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:08.428314 kubelet[2081]: E1031 00:53:08.427866 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:08.430166 env[1321]: time="2025-10-31T00:53:08.430115133Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 31 00:53:08.447355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345004618.mount: Deactivated successfully.
Oct 31 00:53:08.455363 env[1321]: time="2025-10-31T00:53:08.455293470Z" level=info msg="CreateContainer within sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\""
Oct 31 00:53:08.455936 env[1321]: time="2025-10-31T00:53:08.455906543Z" level=info msg="StartContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\""
Oct 31 00:53:08.511658 env[1321]: time="2025-10-31T00:53:08.511614634Z" level=info msg="StartContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" returns successfully"
Oct 31 00:53:08.598822 kubelet[2081]: I1031 00:53:08.598782 2081 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 31 00:53:08.676194 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Oct 31 00:53:08.681666 kubelet[2081]: I1031 00:53:08.681565 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fabe2be-8c76-42c0-ba8f-f0f1a237f108-config-volume\") pod \"coredns-668d6bf9bc-8tlzc\" (UID: \"1fabe2be-8c76-42c0-ba8f-f0f1a237f108\") " pod="kube-system/coredns-668d6bf9bc-8tlzc"
Oct 31 00:53:08.681666 kubelet[2081]: I1031 00:53:08.681619 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7bd39a3-7599-4c5a-92f1-7871f1dac6a4-config-volume\") pod \"coredns-668d6bf9bc-g7xqd\" (UID: \"e7bd39a3-7599-4c5a-92f1-7871f1dac6a4\") " pod="kube-system/coredns-668d6bf9bc-g7xqd"
Oct 31 00:53:08.681666 kubelet[2081]: I1031 00:53:08.681641 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szlv2\" (UniqueName: \"kubernetes.io/projected/e7bd39a3-7599-4c5a-92f1-7871f1dac6a4-kube-api-access-szlv2\") pod \"coredns-668d6bf9bc-g7xqd\" (UID: \"e7bd39a3-7599-4c5a-92f1-7871f1dac6a4\") " pod="kube-system/coredns-668d6bf9bc-g7xqd"
Oct 31 00:53:08.681666 kubelet[2081]: I1031 00:53:08.681663 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpcxl\" (UniqueName: \"kubernetes.io/projected/1fabe2be-8c76-42c0-ba8f-f0f1a237f108-kube-api-access-mpcxl\") pod \"coredns-668d6bf9bc-8tlzc\" (UID: \"1fabe2be-8c76-42c0-ba8f-f0f1a237f108\") " pod="kube-system/coredns-668d6bf9bc-8tlzc"
Oct 31 00:53:08.909186 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Oct 31 00:53:08.939252 kubelet[2081]: E1031 00:53:08.938558 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:08.940692 env[1321]: time="2025-10-31T00:53:08.939600812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tlzc,Uid:1fabe2be-8c76-42c0-ba8f-f0f1a237f108,Namespace:kube-system,Attempt:0,}"
Oct 31 00:53:08.946777 kubelet[2081]: E1031 00:53:08.946745 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:08.947427 env[1321]: time="2025-10-31T00:53:08.947392639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g7xqd,Uid:e7bd39a3-7599-4c5a-92f1-7871f1dac6a4,Namespace:kube-system,Attempt:0,}"
Oct 31 00:53:09.431973 kubelet[2081]: E1031 00:53:09.431940 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:09.459655 kubelet[2081]: I1031 00:53:09.459485 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s8g7g" podStartSLOduration=5.736820557 podStartE2EDuration="12.459468472s" podCreationTimestamp="2025-10-31 00:52:57 +0000 UTC" firstStartedPulling="2025-10-31 00:52:58.041664875 +0000 UTC m=+7.769873411" lastFinishedPulling="2025-10-31 00:53:04.76431279 +0000 UTC m=+14.492521326" observedRunningTime="2025-10-31 00:53:09.45431061 +0000 UTC m=+19.182519146" watchObservedRunningTime="2025-10-31 00:53:09.459468472 +0000 UTC m=+19.187677008"
Oct 31 00:53:10.433923 kubelet[2081]: E1031 00:53:10.433837 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:10.531381 systemd-networkd[1097]: cilium_host: Link UP
Oct 31 00:53:10.533177 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Oct 31 00:53:10.533252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Oct 31 00:53:10.534078 systemd-networkd[1097]: cilium_net: Link UP
Oct 31 00:53:10.534288 systemd-networkd[1097]: cilium_net: Gained carrier
Oct 31 00:53:10.534455 systemd-networkd[1097]: cilium_host: Gained carrier
Oct 31 00:53:10.622403 systemd-networkd[1097]: cilium_vxlan: Link UP
Oct 31 00:53:10.622409 systemd-networkd[1097]: cilium_vxlan: Gained carrier
Oct 31 00:53:10.756541 systemd-networkd[1097]: cilium_net: Gained IPv6LL
Oct 31 00:53:10.764431 systemd-networkd[1097]: cilium_host: Gained IPv6LL
Oct 31 00:53:10.890186 kernel: NET: Registered PF_ALG protocol family
Oct 31 00:53:11.435259 kubelet[2081]: E1031 00:53:11.435228 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:11.510274 systemd-networkd[1097]: lxc_health: Link UP
Oct 31 00:53:11.527183 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Oct 31 00:53:11.527672 systemd-networkd[1097]: lxc_health: Gained carrier
Oct 31 00:53:11.993396 systemd-networkd[1097]: lxcf262cb6fcd01: Link UP
Oct 31 00:53:12.005191 kernel: eth0: renamed from tmpd6ee7
Oct 31 00:53:12.012918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 31 00:53:12.013013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf262cb6fcd01: link becomes ready
Oct 31 00:53:12.012275 systemd-networkd[1097]: lxcf262cb6fcd01: Gained carrier
Oct 31 00:53:12.020122 systemd-networkd[1097]: lxc072ab8d7e230: Link UP
Oct 31 00:53:12.026192 kernel: eth0: renamed from tmp9a44e
Oct 31 00:53:12.036417 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc072ab8d7e230: link becomes ready
Oct 31 00:53:12.036004 systemd-networkd[1097]: lxc072ab8d7e230: Gained carrier
Oct 31 00:53:12.292626 systemd-networkd[1097]: cilium_vxlan: Gained IPv6LL
Oct 31 00:53:12.436681 kubelet[2081]: E1031 00:53:12.436634 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:13.189649 systemd-networkd[1097]: lxc_health: Gained IPv6LL
Oct 31 00:53:13.316539 systemd-networkd[1097]: lxcf262cb6fcd01: Gained IPv6LL
Oct 31 00:53:13.380607 systemd-networkd[1097]: lxc072ab8d7e230: Gained IPv6LL
Oct 31 00:53:15.608755 env[1321]: time="2025-10-31T00:53:15.608670935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:53:15.609440 env[1321]: time="2025-10-31T00:53:15.609384929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:53:15.609440 env[1321]: time="2025-10-31T00:53:15.609408169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:53:15.609762 env[1321]: time="2025-10-31T00:53:15.609716167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a44eab74fff9d5f6f8f363e2132e111f1d9a51d45ac6134edc0856457324983 pid=3310 runtime=io.containerd.runc.v2
Oct 31 00:53:15.610293 env[1321]: time="2025-10-31T00:53:15.610101204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:53:15.610293 env[1321]: time="2025-10-31T00:53:15.610141204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:53:15.610293 env[1321]: time="2025-10-31T00:53:15.610160923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:53:15.610495 env[1321]: time="2025-10-31T00:53:15.610336042Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6ee7623bd6e383e3d536bdee00485a324878f2641070dfeaa075c71f5804468 pid=3323 runtime=io.containerd.runc.v2
Oct 31 00:53:15.646318 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 31 00:53:15.648799 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 31 00:53:15.667494 env[1321]: time="2025-10-31T00:53:15.667444085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g7xqd,Uid:e7bd39a3-7599-4c5a-92f1-7871f1dac6a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a44eab74fff9d5f6f8f363e2132e111f1d9a51d45ac6134edc0856457324983\""
Oct 31 00:53:15.668818 kubelet[2081]: E1031 00:53:15.668792 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:15.669210 env[1321]: time="2025-10-31T00:53:15.669175752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tlzc,Uid:1fabe2be-8c76-42c0-ba8f-f0f1a237f108,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ee7623bd6e383e3d536bdee00485a324878f2641070dfeaa075c71f5804468\""
Oct 31 00:53:15.669872 kubelet[2081]: E1031 00:53:15.669731 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:15.670840 env[1321]: time="2025-10-31T00:53:15.670808740Z" level=info msg="CreateContainer within sandbox \"9a44eab74fff9d5f6f8f363e2132e111f1d9a51d45ac6134edc0856457324983\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 31 00:53:15.675308 env[1321]: time="2025-10-31T00:53:15.675198066Z" level=info msg="CreateContainer within sandbox \"d6ee7623bd6e383e3d536bdee00485a324878f2641070dfeaa075c71f5804468\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 31 00:53:15.687100 env[1321]: time="2025-10-31T00:53:15.687061055Z" level=info msg="CreateContainer within sandbox \"9a44eab74fff9d5f6f8f363e2132e111f1d9a51d45ac6134edc0856457324983\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9c805ec6fa61e4bd3207e4dde779d1cca1eb6557e51d408dd3a15889a9a0e10\""
Oct 31 00:53:15.688353 env[1321]: time="2025-10-31T00:53:15.688322086Z" level=info msg="StartContainer for \"a9c805ec6fa61e4bd3207e4dde779d1cca1eb6557e51d408dd3a15889a9a0e10\""
Oct 31 00:53:15.690105 env[1321]: time="2025-10-31T00:53:15.690071832Z" level=info msg="CreateContainer within sandbox \"d6ee7623bd6e383e3d536bdee00485a324878f2641070dfeaa075c71f5804468\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"861dbf3e839cd881c83900813aef43dcd0fb85262636da6197683d6e00badca5\""
Oct 31 00:53:15.690493 env[1321]: time="2025-10-31T00:53:15.690446590Z" level=info msg="StartContainer for \"861dbf3e839cd881c83900813aef43dcd0fb85262636da6197683d6e00badca5\""
Oct 31 00:53:15.743348 env[1321]: time="2025-10-31T00:53:15.743236026Z" level=info msg="StartContainer for \"861dbf3e839cd881c83900813aef43dcd0fb85262636da6197683d6e00badca5\" returns successfully"
Oct 31 00:53:15.744000 env[1321]: time="2025-10-31T00:53:15.743771742Z" level=info msg="StartContainer for \"a9c805ec6fa61e4bd3207e4dde779d1cca1eb6557e51d408dd3a15889a9a0e10\" returns successfully"
Oct 31 00:53:16.445431 kubelet[2081]: E1031 00:53:16.445389 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:16.447423 kubelet[2081]: E1031 00:53:16.447390 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:16.457908 kubelet[2081]: I1031 00:53:16.457861 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g7xqd" podStartSLOduration=19.45784854 podStartE2EDuration="19.45784854s" podCreationTimestamp="2025-10-31 00:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:53:16.457497903 +0000 UTC m=+26.185706439" watchObservedRunningTime="2025-10-31 00:53:16.45784854 +0000 UTC m=+26.186057076"
Oct 31 00:53:16.471734 kubelet[2081]: I1031 00:53:16.471665 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8tlzc" podStartSLOduration=19.471645201 podStartE2EDuration="19.471645201s" podCreationTimestamp="2025-10-31 00:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:53:16.471267924 +0000 UTC m=+26.199476500" watchObservedRunningTime="2025-10-31 00:53:16.471645201 +0000 UTC m=+26.199853737"
Oct 31 00:53:16.614127 systemd[1]: run-containerd-runc-k8s.io-9a44eab74fff9d5f6f8f363e2132e111f1d9a51d45ac6134edc0856457324983-runc.duYtSN.mount: Deactivated successfully.
Oct 31 00:53:17.449453 kubelet[2081]: E1031 00:53:17.449422 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:17.449889 kubelet[2081]: E1031 00:53:17.449473 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:18.450982 kubelet[2081]: E1031 00:53:18.450944 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:18.451472 kubelet[2081]: E1031 00:53:18.451454 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:22.581763 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:46160.service.
Oct 31 00:53:22.621799 kubelet[2081]: I1031 00:53:22.619299 2081 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 31 00:53:22.621799 kubelet[2081]: E1031 00:53:22.620759 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:22.634370 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 46160 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:22.637167 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:22.653200 systemd[1]: Started session-6.scope.
Oct 31 00:53:22.654201 systemd-logind[1304]: New session 6 of user core.
Oct 31 00:53:22.788687 sshd[3469]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:22.791135 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:46160.service: Deactivated successfully.
Oct 31 00:53:22.792374 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 00:53:22.792917 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit.
Oct 31 00:53:22.793744 systemd-logind[1304]: Removed session 6.
Oct 31 00:53:23.461583 kubelet[2081]: E1031 00:53:23.461529 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:53:27.791786 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:46180.service.
Oct 31 00:53:27.836135 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:27.837646 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:27.841588 systemd-logind[1304]: New session 7 of user core.
Oct 31 00:53:27.842056 systemd[1]: Started session-7.scope.
Oct 31 00:53:27.958982 sshd[3484]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:27.961390 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:46180.service: Deactivated successfully.
Oct 31 00:53:27.962441 systemd[1]: session-7.scope: Deactivated successfully.
Oct 31 00:53:27.962458 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit.
Oct 31 00:53:27.963417 systemd-logind[1304]: Removed session 7.
Oct 31 00:53:32.962231 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:46564.service.
Oct 31 00:53:33.010199 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 46564 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:33.012062 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:33.016355 systemd-logind[1304]: New session 8 of user core.
Oct 31 00:53:33.016881 systemd[1]: Started session-8.scope.
Oct 31 00:53:33.149660 sshd[3501]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:33.152083 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:46564.service: Deactivated successfully.
Oct 31 00:53:33.153142 systemd[1]: session-8.scope: Deactivated successfully.
Oct 31 00:53:33.153145 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit.
Oct 31 00:53:33.154127 systemd-logind[1304]: Removed session 8.
Oct 31 00:53:38.152361 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:46578.service.
Oct 31 00:53:38.210059 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 46578 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:38.212590 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:38.217901 systemd-logind[1304]: New session 9 of user core.
Oct 31 00:53:38.218291 systemd[1]: Started session-9.scope.
Oct 31 00:53:38.356369 sshd[3517]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:38.359771 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:46598.service.
Oct 31 00:53:38.362291 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:46578.service: Deactivated successfully.
Oct 31 00:53:38.363461 systemd[1]: session-9.scope: Deactivated successfully.
Oct 31 00:53:38.363514 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit.
Oct 31 00:53:38.364337 systemd-logind[1304]: Removed session 9.
Oct 31 00:53:38.405864 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:38.407902 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:38.413590 systemd-logind[1304]: New session 10 of user core.
Oct 31 00:53:38.414506 systemd[1]: Started session-10.scope.
Oct 31 00:53:38.593543 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:46606.service.
Oct 31 00:53:38.596084 sshd[3530]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:38.600027 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit.
Oct 31 00:53:38.600237 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:46598.service: Deactivated successfully.
Oct 31 00:53:38.601200 systemd[1]: session-10.scope: Deactivated successfully.
Oct 31 00:53:38.601709 systemd-logind[1304]: Removed session 10.
Oct 31 00:53:38.649260 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:38.650744 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:38.655106 systemd-logind[1304]: New session 11 of user core.
Oct 31 00:53:38.655606 systemd[1]: Started session-11.scope.
Oct 31 00:53:38.772274 sshd[3543]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:38.774996 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:46606.service: Deactivated successfully.
Oct 31 00:53:38.776027 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit.
Oct 31 00:53:38.776028 systemd[1]: session-11.scope: Deactivated successfully.
Oct 31 00:53:38.776970 systemd-logind[1304]: Removed session 11.
Oct 31 00:53:43.775684 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:42690.service.
Oct 31 00:53:43.831054 sshd[3561]: Accepted publickey for core from 10.0.0.1 port 42690 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:43.832370 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:43.838363 systemd-logind[1304]: New session 12 of user core.
Oct 31 00:53:43.839215 systemd[1]: Started session-12.scope.
Oct 31 00:53:43.981725 sshd[3561]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:43.985647 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:42690.service: Deactivated successfully.
Oct 31 00:53:43.987879 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit.
Oct 31 00:53:43.988552 systemd[1]: session-12.scope: Deactivated successfully.
Oct 31 00:53:43.990035 systemd-logind[1304]: Removed session 12.
Oct 31 00:53:48.987842 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:42704.service.
Oct 31 00:53:49.032077 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:49.033882 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:49.037959 systemd-logind[1304]: New session 13 of user core.
Oct 31 00:53:49.038457 systemd[1]: Started session-13.scope.
Oct 31 00:53:49.150250 sshd[3575]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:49.152567 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:42716.service.
Oct 31 00:53:49.153124 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:42704.service: Deactivated successfully.
Oct 31 00:53:49.154043 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit.
Oct 31 00:53:49.154128 systemd[1]: session-13.scope: Deactivated successfully.
Oct 31 00:53:49.155206 systemd-logind[1304]: Removed session 13.
Oct 31 00:53:49.197698 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 42716 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:49.198967 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:49.202604 systemd-logind[1304]: New session 14 of user core.
Oct 31 00:53:49.203454 systemd[1]: Started session-14.scope.
Oct 31 00:53:49.391261 sshd[3587]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:49.393252 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:60028.service.
Oct 31 00:53:49.395059 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:42716.service: Deactivated successfully.
Oct 31 00:53:49.395963 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit.
Oct 31 00:53:49.396019 systemd[1]: session-14.scope: Deactivated successfully.
Oct 31 00:53:49.399878 systemd-logind[1304]: Removed session 14.
Oct 31 00:53:49.440355 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:49.442243 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:49.445816 systemd-logind[1304]: New session 15 of user core.
Oct 31 00:53:49.446743 systemd[1]: Started session-15.scope.
Oct 31 00:53:49.971947 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:60034.service.
Oct 31 00:53:49.973318 sshd[3599]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:49.977303 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit.
Oct 31 00:53:49.977481 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:60028.service: Deactivated successfully.
Oct 31 00:53:49.979647 systemd[1]: session-15.scope: Deactivated successfully.
Oct 31 00:53:49.980500 systemd-logind[1304]: Removed session 15.
Oct 31 00:53:50.020516 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 60034 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:50.022126 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:50.026127 systemd[1]: Started session-16.scope.
Oct 31 00:53:50.026494 systemd-logind[1304]: New session 16 of user core.
Oct 31 00:53:50.256208 sshd[3617]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:50.258380 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:60044.service.
Oct 31 00:53:50.264886 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:60034.service: Deactivated successfully.
Oct 31 00:53:50.266029 systemd[1]: session-16.scope: Deactivated successfully.
Oct 31 00:53:50.266047 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit.
Oct 31 00:53:50.267081 systemd-logind[1304]: Removed session 16.
Oct 31 00:53:50.305941 sshd[3632]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:53:50.307843 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:53:50.312862 systemd[1]: Started session-17.scope.
Oct 31 00:53:50.313230 systemd-logind[1304]: New session 17 of user core.
Oct 31 00:53:50.452012 sshd[3632]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:50.454858 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:60044.service: Deactivated successfully.
Oct 31 00:53:50.455648 systemd[1]: session-17.scope: Deactivated successfully.
Oct 31 00:53:50.456564 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit.
Oct 31 00:53:50.457434 systemd-logind[1304]: Removed session 17.
Oct 31 00:53:55.456670 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:60058.service.
Oct 31 00:53:55.503606 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 60058 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:53:55.505467 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:53:55.509945 systemd[1]: Started session-18.scope. Oct 31 00:53:55.510259 systemd-logind[1304]: New session 18 of user core. Oct 31 00:53:55.627032 sshd[3653]: pam_unix(sshd:session): session closed for user core Oct 31 00:53:55.630168 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:60058.service: Deactivated successfully. Oct 31 00:53:55.631186 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:53:55.631497 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:53:55.632206 systemd-logind[1304]: Removed session 18. Oct 31 00:54:00.629856 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:34154.service. Oct 31 00:54:00.679508 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 34154 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:54:00.681449 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:54:00.685514 systemd-logind[1304]: New session 19 of user core. Oct 31 00:54:00.686018 systemd[1]: Started session-19.scope. Oct 31 00:54:00.796362 sshd[3669]: pam_unix(sshd:session): session closed for user core Oct 31 00:54:00.798935 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:34154.service: Deactivated successfully. Oct 31 00:54:00.799909 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit. Oct 31 00:54:00.799994 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:54:00.800849 systemd-logind[1304]: Removed session 19. Oct 31 00:54:05.799617 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:34158.service. 
Oct 31 00:54:05.843689 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 34158 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:54:05.845088 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:54:05.849221 systemd-logind[1304]: New session 20 of user core. Oct 31 00:54:05.849954 systemd[1]: Started session-20.scope. Oct 31 00:54:05.981309 sshd[3683]: pam_unix(sshd:session): session closed for user core Oct 31 00:54:05.984032 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:34174.service. Oct 31 00:54:05.985852 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:54:05.986031 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:34158.service: Deactivated successfully. Oct 31 00:54:05.986870 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:54:05.987314 systemd-logind[1304]: Removed session 20. Oct 31 00:54:06.033705 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 34174 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U Oct 31 00:54:06.035534 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 00:54:06.039303 systemd-logind[1304]: New session 21 of user core. Oct 31 00:54:06.040279 systemd[1]: Started session-21.scope. Oct 31 00:54:07.865780 env[1321]: time="2025-10-31T00:54:07.864474284Z" level=info msg="StopContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" with timeout 30 (s)" Oct 31 00:54:07.865780 env[1321]: time="2025-10-31T00:54:07.864839288Z" level=info msg="Stop container \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" with signal terminated" Oct 31 00:54:07.884584 systemd[1]: run-containerd-runc-k8s.io-9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc-runc.o5i7Wc.mount: Deactivated successfully. 
Oct 31 00:54:07.893326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c-rootfs.mount: Deactivated successfully. Oct 31 00:54:07.905531 env[1321]: time="2025-10-31T00:54:07.905484890Z" level=info msg="shim disconnected" id=b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c Oct 31 00:54:07.905531 env[1321]: time="2025-10-31T00:54:07.905530771Z" level=warning msg="cleaning up after shim disconnected" id=b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c namespace=k8s.io Oct 31 00:54:07.905747 env[1321]: time="2025-10-31T00:54:07.905540451Z" level=info msg="cleaning up dead shim" Oct 31 00:54:07.906719 env[1321]: time="2025-10-31T00:54:07.906671784Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:54:07.911436 env[1321]: time="2025-10-31T00:54:07.911394920Z" level=info msg="StopContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" with timeout 2 (s)" Oct 31 00:54:07.911686 env[1321]: time="2025-10-31T00:54:07.911661163Z" level=info msg="Stop container \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" with signal terminated" Oct 31 00:54:07.913165 env[1321]: time="2025-10-31T00:54:07.913126061Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3744 runtime=io.containerd.runc.v2\n" Oct 31 00:54:07.915691 env[1321]: time="2025-10-31T00:54:07.915653891Z" level=info msg="StopContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" returns successfully" Oct 31 00:54:07.916302 env[1321]: time="2025-10-31T00:54:07.916265098Z" level=info msg="StopPodSandbox for 
\"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\"" Oct 31 00:54:07.916458 env[1321]: time="2025-10-31T00:54:07.916435980Z" level=info msg="Container to stop \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:07.918427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8-shm.mount: Deactivated successfully. Oct 31 00:54:07.919790 systemd-networkd[1097]: lxc_health: Link DOWN Oct 31 00:54:07.919801 systemd-networkd[1097]: lxc_health: Lost carrier Oct 31 00:54:07.949066 env[1321]: time="2025-10-31T00:54:07.949003926Z" level=info msg="shim disconnected" id=2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8 Oct 31 00:54:07.949066 env[1321]: time="2025-10-31T00:54:07.949058727Z" level=warning msg="cleaning up after shim disconnected" id=2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8 namespace=k8s.io Oct 31 00:54:07.949066 env[1321]: time="2025-10-31T00:54:07.949068367Z" level=info msg="cleaning up dead shim" Oct 31 00:54:07.957281 env[1321]: time="2025-10-31T00:54:07.957233583Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3789 runtime=io.containerd.runc.v2\n" Oct 31 00:54:07.957571 env[1321]: time="2025-10-31T00:54:07.957537147Z" level=info msg="TearDown network for sandbox \"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\" successfully" Oct 31 00:54:07.957571 env[1321]: time="2025-10-31T00:54:07.957566307Z" level=info msg="StopPodSandbox for \"2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8\" returns successfully" Oct 31 00:54:07.970635 env[1321]: time="2025-10-31T00:54:07.970591982Z" level=info msg="shim disconnected" id=9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc Oct 31 00:54:07.970852 env[1321]: 
time="2025-10-31T00:54:07.970833465Z" level=warning msg="cleaning up after shim disconnected" id=9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc namespace=k8s.io Oct 31 00:54:07.970909 env[1321]: time="2025-10-31T00:54:07.970896665Z" level=info msg="cleaning up dead shim" Oct 31 00:54:07.978448 env[1321]: time="2025-10-31T00:54:07.978407994Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3815 runtime=io.containerd.runc.v2\n" Oct 31 00:54:07.980622 env[1321]: time="2025-10-31T00:54:07.980582620Z" level=info msg="StopContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" returns successfully" Oct 31 00:54:07.981351 env[1321]: time="2025-10-31T00:54:07.981319549Z" level=info msg="StopPodSandbox for \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\"" Oct 31 00:54:07.981432 env[1321]: time="2025-10-31T00:54:07.981382990Z" level=info msg="Container to stop \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:07.981432 env[1321]: time="2025-10-31T00:54:07.981397910Z" level=info msg="Container to stop \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:07.981432 env[1321]: time="2025-10-31T00:54:07.981410150Z" level=info msg="Container to stop \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:07.981432 env[1321]: time="2025-10-31T00:54:07.981421790Z" level=info msg="Container to stop \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:07.981600 env[1321]: time="2025-10-31T00:54:07.981433230Z" level=info msg="Container to stop 
\"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 31 00:54:08.002500 env[1321]: time="2025-10-31T00:54:08.002450719Z" level=info msg="shim disconnected" id=454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a Oct 31 00:54:08.002719 env[1321]: time="2025-10-31T00:54:08.002701962Z" level=warning msg="cleaning up after shim disconnected" id=454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a namespace=k8s.io Oct 31 00:54:08.002784 env[1321]: time="2025-10-31T00:54:08.002771323Z" level=info msg="cleaning up dead shim" Oct 31 00:54:08.010795 env[1321]: time="2025-10-31T00:54:08.010752295Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3847 runtime=io.containerd.runc.v2\n" Oct 31 00:54:08.011092 env[1321]: time="2025-10-31T00:54:08.011059938Z" level=info msg="TearDown network for sandbox \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" successfully" Oct 31 00:54:08.011145 env[1321]: time="2025-10-31T00:54:08.011096659Z" level=info msg="StopPodSandbox for \"454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a\" returns successfully" Oct 31 00:54:08.063656 kubelet[2081]: I1031 00:54:08.063617 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-cgroup\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064077 kubelet[2081]: I1031 00:54:08.064059 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-config-path\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 
00:54:08.064202 kubelet[2081]: I1031 00:54:08.064187 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-kernel\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064280 kubelet[2081]: I1031 00:54:08.064267 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-etc-cni-netd\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064349 kubelet[2081]: I1031 00:54:08.064338 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-run\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064423 kubelet[2081]: I1031 00:54:08.064410 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-bpf-maps\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064498 kubelet[2081]: I1031 00:54:08.064482 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hostproc\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064574 kubelet[2081]: I1031 00:54:08.064561 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-xtables-lock\") pod 
\"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064669 kubelet[2081]: I1031 00:54:08.064656 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcvkj\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-kube-api-access-lcvkj\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064738 kubelet[2081]: I1031 00:54:08.064726 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-lib-modules\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064812 kubelet[2081]: I1031 00:54:08.064801 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hubble-tls\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.064893 kubelet[2081]: I1031 00:54:08.064880 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djfbs\" (UniqueName: \"kubernetes.io/projected/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-kube-api-access-djfbs\") pod \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\" (UID: \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\") " Oct 31 00:54:08.064969 kubelet[2081]: I1031 00:54:08.064956 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cni-path\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.065048 kubelet[2081]: I1031 00:54:08.065035 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-clustermesh-secrets\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.065133 kubelet[2081]: I1031 00:54:08.065120 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-net\") pod \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\" (UID: \"d6153ec7-32f9-44ec-b0dc-0c7fd399491a\") " Oct 31 00:54:08.065214 kubelet[2081]: I1031 00:54:08.065202 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-cilium-config-path\") pod \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\" (UID: \"5cdd9499-c2b0-4ffa-ae7d-5b76550b065a\") " Oct 31 00:54:08.065406 kubelet[2081]: I1031 00:54:08.065366 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.065582 kubelet[2081]: I1031 00:54:08.065553 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075430 kubelet[2081]: I1031 00:54:08.075388 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cni-path" (OuterVolumeSpecName: "cni-path") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075530 kubelet[2081]: I1031 00:54:08.075458 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075530 kubelet[2081]: I1031 00:54:08.075498 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075530 kubelet[2081]: I1031 00:54:08.075517 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075613 kubelet[2081]: I1031 00:54:08.075532 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hostproc" (OuterVolumeSpecName: "hostproc") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075613 kubelet[2081]: I1031 00:54:08.075547 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075613 kubelet[2081]: I1031 00:54:08.075548 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075711 kubelet[2081]: I1031 00:54:08.075416 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cdd9499-c2b0-4ffa-ae7d-5b76550b065a" (UID: "5cdd9499-c2b0-4ffa-ae7d-5b76550b065a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:54:08.075830 kubelet[2081]: I1031 00:54:08.075813 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 31 00:54:08.075902 kubelet[2081]: I1031 00:54:08.075829 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:54:08.076415 kubelet[2081]: I1031 00:54:08.076318 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-kube-api-access-djfbs" (OuterVolumeSpecName: "kube-api-access-djfbs") pod "5cdd9499-c2b0-4ffa-ae7d-5b76550b065a" (UID: "5cdd9499-c2b0-4ffa-ae7d-5b76550b065a"). InnerVolumeSpecName "kube-api-access-djfbs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:54:08.077454 kubelet[2081]: I1031 00:54:08.077413 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-kube-api-access-lcvkj" (OuterVolumeSpecName: "kube-api-access-lcvkj") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "kube-api-access-lcvkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:54:08.080666 kubelet[2081]: I1031 00:54:08.080626 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:54:08.083209 kubelet[2081]: I1031 00:54:08.083178 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d6153ec7-32f9-44ec-b0dc-0c7fd399491a" (UID: "d6153ec7-32f9-44ec-b0dc-0c7fd399491a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165668 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165706 2081 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165716 2081 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165762 2081 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcvkj\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-kube-api-access-lcvkj\") on node \"localhost\" DevicePath 
\"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165775 2081 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165782 2081 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165790 2081 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djfbs\" (UniqueName: \"kubernetes.io/projected/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-kube-api-access-djfbs\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.166408 kubelet[2081]: I1031 00:54:08.165798 2081 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165806 2081 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165813 2081 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165821 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165830 2081 reconciler_common.go:299] "Volume 
detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165838 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165845 2081 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165852 2081 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.167195 kubelet[2081]: I1031 00:54:08.165860 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6153ec7-32f9-44ec-b0dc-0c7fd399491a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:08.549174 kubelet[2081]: I1031 00:54:08.549128 2081 scope.go:117] "RemoveContainer" containerID="b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c" Oct 31 00:54:08.553759 env[1321]: time="2025-10-31T00:54:08.553717398Z" level=info msg="RemoveContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\"" Oct 31 00:54:08.559379 env[1321]: time="2025-10-31T00:54:08.559320463Z" level=info msg="RemoveContainer for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" returns successfully" Oct 31 00:54:08.560124 kubelet[2081]: I1031 00:54:08.559600 2081 scope.go:117] "RemoveContainer" 
containerID="b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c" Oct 31 00:54:08.560215 env[1321]: time="2025-10-31T00:54:08.559814429Z" level=error msg="ContainerStatus for \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\": not found" Oct 31 00:54:08.560385 kubelet[2081]: E1031 00:54:08.560360 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\": not found" containerID="b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c" Oct 31 00:54:08.564700 kubelet[2081]: I1031 00:54:08.564575 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c"} err="failed to get container status \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b68e093fd93a0f15ba018c48fa3ddaee2fabce6de18549b2cd0b61e28441408c\": not found" Oct 31 00:54:08.564797 kubelet[2081]: I1031 00:54:08.564784 2081 scope.go:117] "RemoveContainer" containerID="9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc" Oct 31 00:54:08.567985 env[1321]: time="2025-10-31T00:54:08.567717760Z" level=info msg="RemoveContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\"" Oct 31 00:54:08.570616 env[1321]: time="2025-10-31T00:54:08.570552793Z" level=info msg="RemoveContainer for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" returns successfully" Oct 31 00:54:08.570877 kubelet[2081]: I1031 00:54:08.570860 2081 scope.go:117] "RemoveContainer" 
containerID="83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2"
Oct 31 00:54:08.577054 env[1321]: time="2025-10-31T00:54:08.572880020Z" level=info msg="RemoveContainer for \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\""
Oct 31 00:54:08.577054 env[1321]: time="2025-10-31T00:54:08.575222087Z" level=info msg="RemoveContainer for \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\" returns successfully"
Oct 31 00:54:08.577054 env[1321]: time="2025-10-31T00:54:08.575984375Z" level=info msg="RemoveContainer for \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\""
Oct 31 00:54:08.577252 kubelet[2081]: I1031 00:54:08.575338 2081 scope.go:117] "RemoveContainer" containerID="2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5"
Oct 31 00:54:08.578984 env[1321]: time="2025-10-31T00:54:08.578945410Z" level=info msg="RemoveContainer for \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\" returns successfully"
Oct 31 00:54:08.579236 kubelet[2081]: I1031 00:54:08.579212 2081 scope.go:117] "RemoveContainer" containerID="ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9"
Oct 31 00:54:08.580460 env[1321]: time="2025-10-31T00:54:08.580429107Z" level=info msg="RemoveContainer for \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\""
Oct 31 00:54:08.583809 env[1321]: time="2025-10-31T00:54:08.583768505Z" level=info msg="RemoveContainer for \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\" returns successfully"
Oct 31 00:54:08.584092 kubelet[2081]: I1031 00:54:08.584056 2081 scope.go:117] "RemoveContainer" containerID="a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a"
Oct 31 00:54:08.586035 env[1321]: time="2025-10-31T00:54:08.585717048Z" level=info msg="RemoveContainer for \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\""
Oct 31 00:54:08.590383 env[1321]: time="2025-10-31T00:54:08.590357701Z" level=info msg="RemoveContainer for \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\" returns successfully"
Oct 31 00:54:08.590662 kubelet[2081]: I1031 00:54:08.590626 2081 scope.go:117] "RemoveContainer" containerID="9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc"
Oct 31 00:54:08.590982 env[1321]: time="2025-10-31T00:54:08.590930188Z" level=error msg="ContainerStatus for \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\": not found"
Oct 31 00:54:08.591104 kubelet[2081]: E1031 00:54:08.591067 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\": not found" containerID="9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc"
Oct 31 00:54:08.591146 kubelet[2081]: I1031 00:54:08.591099 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc"} err="failed to get container status \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc\": not found"
Oct 31 00:54:08.591146 kubelet[2081]: I1031 00:54:08.591125 2081 scope.go:117] "RemoveContainer" containerID="83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2"
Oct 31 00:54:08.591413 env[1321]: time="2025-10-31T00:54:08.591330072Z" level=error msg="ContainerStatus for \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\": not found"
Oct 31 00:54:08.591501 kubelet[2081]: E1031 00:54:08.591475 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\": not found" containerID="83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2"
Oct 31 00:54:08.591549 kubelet[2081]: I1031 00:54:08.591503 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2"} err="failed to get container status \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"83f9e80bf947d2046fca362f8f315ddb22e36f0cc8c1241cafdd3ccecb5700d2\": not found"
Oct 31 00:54:08.591549 kubelet[2081]: I1031 00:54:08.591520 2081 scope.go:117] "RemoveContainer" containerID="2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5"
Oct 31 00:54:08.591783 env[1321]: time="2025-10-31T00:54:08.591740677Z" level=error msg="ContainerStatus for \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\": not found"
Oct 31 00:54:08.591949 kubelet[2081]: E1031 00:54:08.591930 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\": not found" containerID="2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5"
Oct 31 00:54:08.591997 kubelet[2081]: I1031 00:54:08.591953 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5"} err="failed to get container status \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2492514274ce563bf872014f5fb48fbda3a3cf4839faa2cde34deb0286f9b9c5\": not found"
Oct 31 00:54:08.591997 kubelet[2081]: I1031 00:54:08.591967 2081 scope.go:117] "RemoveContainer" containerID="ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9"
Oct 31 00:54:08.592193 env[1321]: time="2025-10-31T00:54:08.592134402Z" level=error msg="ContainerStatus for \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\": not found"
Oct 31 00:54:08.592287 kubelet[2081]: E1031 00:54:08.592270 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\": not found" containerID="ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9"
Oct 31 00:54:08.592330 kubelet[2081]: I1031 00:54:08.592291 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9"} err="failed to get container status \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca1366c550839fe9a1ad35e833f6dae113b4842188c51625c895023f1bc66db9\": not found"
Oct 31 00:54:08.592330 kubelet[2081]: I1031 00:54:08.592307 2081 scope.go:117] "RemoveContainer" containerID="a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a"
Oct 31 00:54:08.592524 env[1321]: time="2025-10-31T00:54:08.592484566Z" level=error msg="ContainerStatus for \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\": not found"
Oct 31 00:54:08.592690 kubelet[2081]: E1031 00:54:08.592673 2081 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\": not found" containerID="a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a"
Oct 31 00:54:08.592779 kubelet[2081]: I1031 00:54:08.592763 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a"} err="failed to get container status \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a89e7362e854c0a5e259bf5b64e115b9085e95d7db25b88b88736c6c8e46817a\": not found"
Oct 31 00:54:08.878977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9efcda7e99df5f2d3eb3244481f738741289ad0b26873802e3234f26240003cc-rootfs.mount: Deactivated successfully.
Oct 31 00:54:08.879145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2560f1ae4bd89a2ecee50499646f3623c34eb5d42fa15ab6e1f53aed9836e3a8-rootfs.mount: Deactivated successfully.
Oct 31 00:54:08.879245 systemd[1]: var-lib-kubelet-pods-5cdd9499\x2dc2b0\x2d4ffa\x2dae7d\x2d5b76550b065a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjfbs.mount: Deactivated successfully.
Oct 31 00:54:08.879332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a-rootfs.mount: Deactivated successfully.
Oct 31 00:54:08.879410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-454730393785e79113473b150b5c224b4798d2eaba087675e16ea3935316b47a-shm.mount: Deactivated successfully.
Oct 31 00:54:08.879487 systemd[1]: var-lib-kubelet-pods-d6153ec7\x2d32f9\x2d44ec\x2db0dc\x2d0c7fd399491a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlcvkj.mount: Deactivated successfully.
Oct 31 00:54:08.879570 systemd[1]: var-lib-kubelet-pods-d6153ec7\x2d32f9\x2d44ec\x2db0dc\x2d0c7fd399491a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 31 00:54:08.879648 systemd[1]: var-lib-kubelet-pods-d6153ec7\x2d32f9\x2d44ec\x2db0dc\x2d0c7fd399491a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 31 00:54:09.817374 sshd[3695]: pam_unix(sshd:session): session closed for user core
Oct 31 00:54:09.819716 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:57858.service.
Oct 31 00:54:09.820680 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:34174.service: Deactivated successfully.
Oct 31 00:54:09.821665 systemd-logind[1304]: Session 21 logged out. Waiting for processes to exit.
Oct 31 00:54:09.821727 systemd[1]: session-21.scope: Deactivated successfully.
Oct 31 00:54:09.823401 systemd-logind[1304]: Removed session 21.
Oct 31 00:54:09.866112 sshd[3864]: Accepted publickey for core from 10.0.0.1 port 57858 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:54:09.867285 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:54:09.871122 systemd-logind[1304]: New session 22 of user core.
Oct 31 00:54:09.871456 systemd[1]: Started session-22.scope.
Oct 31 00:54:10.374202 kubelet[2081]: I1031 00:54:10.374170 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdd9499-c2b0-4ffa-ae7d-5b76550b065a" path="/var/lib/kubelet/pods/5cdd9499-c2b0-4ffa-ae7d-5b76550b065a/volumes"
Oct 31 00:54:10.374580 kubelet[2081]: I1031 00:54:10.374563 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6153ec7-32f9-44ec-b0dc-0c7fd399491a" path="/var/lib/kubelet/pods/d6153ec7-32f9-44ec-b0dc-0c7fd399491a/volumes"
Oct 31 00:54:10.429392 kubelet[2081]: E1031 00:54:10.429350 2081 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 31 00:54:11.316760 sshd[3864]: pam_unix(sshd:session): session closed for user core
Oct 31 00:54:11.318535 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:57864.service.
Oct 31 00:54:11.320481 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:57858.service: Deactivated successfully.
Oct 31 00:54:11.321426 systemd-logind[1304]: Session 22 logged out. Waiting for processes to exit.
Oct 31 00:54:11.321487 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 00:54:11.327525 systemd-logind[1304]: Removed session 22.
Oct 31 00:54:11.347486 kubelet[2081]: I1031 00:54:11.347446 2081 memory_manager.go:355] "RemoveStaleState removing state" podUID="d6153ec7-32f9-44ec-b0dc-0c7fd399491a" containerName="cilium-agent"
Oct 31 00:54:11.347648 kubelet[2081]: I1031 00:54:11.347636 2081 memory_manager.go:355] "RemoveStaleState removing state" podUID="5cdd9499-c2b0-4ffa-ae7d-5b76550b065a" containerName="cilium-operator"
Oct 31 00:54:11.374086 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 57864 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:54:11.372477 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:54:11.376424 systemd-logind[1304]: New session 23 of user core.
Oct 31 00:54:11.377219 systemd[1]: Started session-23.scope.
Oct 31 00:54:11.386587 kubelet[2081]: I1031 00:54:11.386531 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58dc\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-kube-api-access-n58dc\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386587 kubelet[2081]: I1031 00:54:11.386574 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cni-path\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386593 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-clustermesh-secrets\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386609 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-kernel\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386630 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-xtables-lock\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386650 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-config-path\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386667 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-bpf-maps\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.386907 kubelet[2081]: I1031 00:54:11.386684 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-lib-modules\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386698 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-cgroup\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386712 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-etc-cni-netd\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386729 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-ipsec-secrets\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386744 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-run\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386757 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-hostproc\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387048 kubelet[2081]: I1031 00:54:11.386775 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-net\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.387218 kubelet[2081]: I1031 00:54:11.386794 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-hubble-tls\") pod \"cilium-l55jg\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") " pod="kube-system/cilium-l55jg"
Oct 31 00:54:11.506747 sshd[3877]: pam_unix(sshd:session): session closed for user core
Oct 31 00:54:11.513382 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:57868.service.
Oct 31 00:54:11.514400 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:57864.service: Deactivated successfully.
Oct 31 00:54:11.515547 systemd-logind[1304]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:54:11.519267 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:54:11.525302 kubelet[2081]: E1031 00:54:11.520481 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:54:11.522311 systemd-logind[1304]: Removed session 23.
Oct 31 00:54:11.525474 env[1321]: time="2025-10-31T00:54:11.521712465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l55jg,Uid:5d6376af-cb3c-4319-8288-952fc76b66d5,Namespace:kube-system,Attempt:0,}"
Oct 31 00:54:11.539408 env[1321]: time="2025-10-31T00:54:11.539336093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:54:11.539408 env[1321]: time="2025-10-31T00:54:11.539375773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:54:11.539408 env[1321]: time="2025-10-31T00:54:11.539385893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:54:11.539725 env[1321]: time="2025-10-31T00:54:11.539513135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b pid=3907 runtime=io.containerd.runc.v2
Oct 31 00:54:11.567651 sshd[3895]: Accepted publickey for core from 10.0.0.1 port 57868 ssh2: RSA SHA256:U8uh4tNlAoztP9XwPhxxRCHpcOqZ9ym/JukaPHih73U
Oct 31 00:54:11.569623 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:54:11.575087 systemd[1]: Started session-24.scope.
Oct 31 00:54:11.575508 systemd-logind[1304]: New session 24 of user core.
Oct 31 00:54:11.578779 env[1321]: time="2025-10-31T00:54:11.578739232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l55jg,Uid:5d6376af-cb3c-4319-8288-952fc76b66d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\""
Oct 31 00:54:11.580585 kubelet[2081]: E1031 00:54:11.580257 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:54:11.582977 env[1321]: time="2025-10-31T00:54:11.582933917Z" level=info msg="CreateContainer within sandbox \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 31 00:54:11.593684 env[1321]: time="2025-10-31T00:54:11.593644791Z" level=info msg="CreateContainer within sandbox \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\""
Oct 31 00:54:11.594341 env[1321]: time="2025-10-31T00:54:11.594312158Z" level=info msg="StartContainer for \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\""
Oct 31 00:54:11.640464 env[1321]: time="2025-10-31T00:54:11.640399729Z" level=info msg="StartContainer for \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\" returns successfully"
Oct 31 00:54:11.674440 env[1321]: time="2025-10-31T00:54:11.674387411Z" level=info msg="shim disconnected" id=a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4
Oct 31 00:54:11.674737 env[1321]: time="2025-10-31T00:54:11.674717534Z" level=warning msg="cleaning up after shim disconnected" id=a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4 namespace=k8s.io
Oct 31 00:54:11.674821 env[1321]: time="2025-10-31T00:54:11.674808055Z" level=info msg="cleaning up dead shim"
Oct 31 00:54:11.682228 env[1321]: time="2025-10-31T00:54:11.682132173Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3998 runtime=io.containerd.runc.v2\n"
Oct 31 00:54:11.766427 kubelet[2081]: I1031 00:54:11.766128 2081 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-31T00:54:11Z","lastTransitionTime":"2025-10-31T00:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 31 00:54:12.564084 env[1321]: time="2025-10-31T00:54:12.564035005Z" level=info msg="StopPodSandbox for \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\""
Oct 31 00:54:12.564478 env[1321]: time="2025-10-31T00:54:12.564097366Z" level=info msg="Container to stop \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 31 00:54:12.566442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b-shm.mount: Deactivated successfully.
Oct 31 00:54:12.591326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b-rootfs.mount: Deactivated successfully.
Oct 31 00:54:12.595722 env[1321]: time="2025-10-31T00:54:12.595665693Z" level=info msg="shim disconnected" id=892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b
Oct 31 00:54:12.595842 env[1321]: time="2025-10-31T00:54:12.595724854Z" level=warning msg="cleaning up after shim disconnected" id=892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b namespace=k8s.io
Oct 31 00:54:12.595842 env[1321]: time="2025-10-31T00:54:12.595735534Z" level=info msg="cleaning up dead shim"
Oct 31 00:54:12.602768 env[1321]: time="2025-10-31T00:54:12.602726126Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4031 runtime=io.containerd.runc.v2\n"
Oct 31 00:54:12.603057 env[1321]: time="2025-10-31T00:54:12.603031929Z" level=info msg="TearDown network for sandbox \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\" successfully"
Oct 31 00:54:12.603096 env[1321]: time="2025-10-31T00:54:12.603057090Z" level=info msg="StopPodSandbox for \"892b495f43b2535646d4eda8ef596840b788d32643bebd9b64efacb6a0e5080b\" returns successfully"
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696239 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-bpf-maps\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696294 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58dc\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-kube-api-access-n58dc\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696312 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-kernel\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696325 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-xtables-lock\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696347 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-ipsec-secrets\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.696627 kubelet[2081]: I1031 00:54:12.696360 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cni-path\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697125 kubelet[2081]: I1031 00:54:12.696377 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-config-path\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697125 kubelet[2081]: I1031 00:54:12.696431 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697125 kubelet[2081]: I1031 00:54:12.696428 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697125 kubelet[2081]: I1031 00:54:12.696483 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697125 kubelet[2081]: I1031 00:54:12.696499 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697321 kubelet[2081]: I1031 00:54:12.696765 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-net\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697321 kubelet[2081]: I1031 00:54:12.696811 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697321 kubelet[2081]: I1031 00:54:12.696870 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-cgroup\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697321 kubelet[2081]: I1031 00:54:12.696901 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.697321 kubelet[2081]: I1031 00:54:12.696915 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-run\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697441 kubelet[2081]: I1031 00:54:12.696930 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-hostproc\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697441 kubelet[2081]: I1031 00:54:12.696946 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-etc-cni-netd\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697441 kubelet[2081]: I1031 00:54:12.696961 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-lib-modules\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697441 kubelet[2081]: I1031 00:54:12.697063 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-hubble-tls\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.697441 kubelet[2081]: I1031 00:54:12.697083 2081 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-clustermesh-secrets\") pod \"5d6376af-cb3c-4319-8288-952fc76b66d5\" (UID: \"5d6376af-cb3c-4319-8288-952fc76b66d5\") "
Oct 31 00:54:12.700703 systemd[1]: var-lib-kubelet-pods-5d6376af\x2dcb3c\x2d4319\x2d8288\x2d952fc76b66d5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 31 00:54:12.703173 kubelet[2081]: I1031 00:54:12.696979 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.703173 kubelet[2081]: I1031 00:54:12.696990 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.703173 kubelet[2081]: I1031 00:54:12.697023 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.703173 kubelet[2081]: I1031 00:54:12.697037 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 31 00:54:12.703173 kubelet[2081]: I1031 00:54:12.702319 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702787 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-run\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702805 2081 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-hostproc\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702813 2081 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702822 2081 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-lib-modules\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702830 2081 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702837 2081 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702845 2081 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703364 kubelet[2081]: I1031 00:54:12.702865 2081 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cni-path\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703543 kubelet[2081]: I1031 00:54:12.702873 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703543 kubelet[2081]: I1031 00:54:12.702881 2081 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.703543 kubelet[2081]: I1031 00:54:12.702888 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Oct 31 00:54:12.704350 kubelet[2081]: I1031 00:54:12.704311 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "cilium-ipsec-secrets".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:54:12.704670 kubelet[2081]: I1031 00:54:12.704641 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:54:12.705988 systemd[1]: var-lib-kubelet-pods-5d6376af\x2dcb3c\x2d4319\x2d8288\x2d952fc76b66d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 31 00:54:12.708534 kubelet[2081]: I1031 00:54:12.708494 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-kube-api-access-n58dc" (OuterVolumeSpecName: "kube-api-access-n58dc") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "kube-api-access-n58dc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:54:12.710203 kubelet[2081]: I1031 00:54:12.710163 2081 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d6376af-cb3c-4319-8288-952fc76b66d5" (UID: "5d6376af-cb3c-4319-8288-952fc76b66d5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:54:12.803772 kubelet[2081]: I1031 00:54:12.803712 2081 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:12.803772 kubelet[2081]: I1031 00:54:12.803756 2081 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n58dc\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-kube-api-access-n58dc\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:12.803772 kubelet[2081]: I1031 00:54:12.803768 2081 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d6376af-cb3c-4319-8288-952fc76b66d5-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:12.803772 kubelet[2081]: I1031 00:54:12.803779 2081 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d6376af-cb3c-4319-8288-952fc76b66d5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 31 00:54:13.491614 systemd[1]: var-lib-kubelet-pods-5d6376af\x2dcb3c\x2d4319\x2d8288\x2d952fc76b66d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn58dc.mount: Deactivated successfully. Oct 31 00:54:13.491765 systemd[1]: var-lib-kubelet-pods-5d6376af\x2dcb3c\x2d4319\x2d8288\x2d952fc76b66d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 31 00:54:13.568817 kubelet[2081]: I1031 00:54:13.568763 2081 scope.go:117] "RemoveContainer" containerID="a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4" Oct 31 00:54:13.570594 env[1321]: time="2025-10-31T00:54:13.570559927Z" level=info msg="RemoveContainer for \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\"" Oct 31 00:54:13.579820 env[1321]: time="2025-10-31T00:54:13.579771380Z" level=info msg="RemoveContainer for \"a0eba3a8812db8160311c6bfe7d918cfb71c0b6a1dcc18cfc6d7b9998ade63a4\" returns successfully" Oct 31 00:54:13.629825 kubelet[2081]: I1031 00:54:13.629784 2081 memory_manager.go:355] "RemoveStaleState removing state" podUID="5d6376af-cb3c-4319-8288-952fc76b66d5" containerName="mount-cgroup" Oct 31 00:54:13.717361 kubelet[2081]: I1031 00:54:13.717265 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82b2e436-1e08-432d-97b6-56fcbb348bd6-clustermesh-secrets\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717361 kubelet[2081]: I1031 00:54:13.717315 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-host-proc-sys-kernel\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717361 kubelet[2081]: I1031 00:54:13.717340 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-bpf-maps\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717361 kubelet[2081]: I1031 00:54:13.717358 2081 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-cni-path\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717361 kubelet[2081]: I1031 00:54:13.717377 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-etc-cni-netd\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717428 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-cilium-cgroup\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717471 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-lib-modules\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717490 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/82b2e436-1e08-432d-97b6-56fcbb348bd6-cilium-ipsec-secrets\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717508 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-hostproc\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717524 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-host-proc-sys-net\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.717815 kubelet[2081]: I1031 00:54:13.717546 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgb2\" (UniqueName: \"kubernetes.io/projected/82b2e436-1e08-432d-97b6-56fcbb348bd6-kube-api-access-fdgb2\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.718135 kubelet[2081]: I1031 00:54:13.717563 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-cilium-run\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.718135 kubelet[2081]: I1031 00:54:13.717582 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82b2e436-1e08-432d-97b6-56fcbb348bd6-xtables-lock\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.718135 kubelet[2081]: I1031 00:54:13.717597 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82b2e436-1e08-432d-97b6-56fcbb348bd6-cilium-config-path\") pod \"cilium-jf4rd\" (UID: 
\"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.718135 kubelet[2081]: I1031 00:54:13.717613 2081 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82b2e436-1e08-432d-97b6-56fcbb348bd6-hubble-tls\") pod \"cilium-jf4rd\" (UID: \"82b2e436-1e08-432d-97b6-56fcbb348bd6\") " pod="kube-system/cilium-jf4rd" Oct 31 00:54:13.933769 kubelet[2081]: E1031 00:54:13.933668 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:13.935039 env[1321]: time="2025-10-31T00:54:13.934856045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf4rd,Uid:82b2e436-1e08-432d-97b6-56fcbb348bd6,Namespace:kube-system,Attempt:0,}" Oct 31 00:54:13.953985 env[1321]: time="2025-10-31T00:54:13.953706356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:54:13.953985 env[1321]: time="2025-10-31T00:54:13.953749196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:54:13.953985 env[1321]: time="2025-10-31T00:54:13.953759356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:54:13.953985 env[1321]: time="2025-10-31T00:54:13.953907078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06 pid=4060 runtime=io.containerd.runc.v2 Oct 31 00:54:13.993814 env[1321]: time="2025-10-31T00:54:13.993764280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf4rd,Uid:82b2e436-1e08-432d-97b6-56fcbb348bd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\"" Oct 31 00:54:13.995471 kubelet[2081]: E1031 00:54:13.994893 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:13.997930 env[1321]: time="2025-10-31T00:54:13.997675720Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 31 00:54:14.013525 env[1321]: time="2025-10-31T00:54:14.013461156Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1906114fe58865e95648d5e011999d807ca6e4b11eeffad24562abc9614d26c4\"" Oct 31 00:54:14.015455 env[1321]: time="2025-10-31T00:54:14.015362615Z" level=info msg="StartContainer for \"1906114fe58865e95648d5e011999d807ca6e4b11eeffad24562abc9614d26c4\"" Oct 31 00:54:14.075800 env[1321]: time="2025-10-31T00:54:14.075749569Z" level=info msg="StartContainer for \"1906114fe58865e95648d5e011999d807ca6e4b11eeffad24562abc9614d26c4\" returns successfully" Oct 31 00:54:14.098945 env[1321]: time="2025-10-31T00:54:14.098899236Z" level=info msg="shim disconnected" 
id=1906114fe58865e95648d5e011999d807ca6e4b11eeffad24562abc9614d26c4 Oct 31 00:54:14.098945 env[1321]: time="2025-10-31T00:54:14.098942717Z" level=warning msg="cleaning up after shim disconnected" id=1906114fe58865e95648d5e011999d807ca6e4b11eeffad24562abc9614d26c4 namespace=k8s.io Oct 31 00:54:14.098945 env[1321]: time="2025-10-31T00:54:14.098952957Z" level=info msg="cleaning up dead shim" Oct 31 00:54:14.106198 env[1321]: time="2025-10-31T00:54:14.106139627Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4145 runtime=io.containerd.runc.v2\n" Oct 31 00:54:14.373669 kubelet[2081]: I1031 00:54:14.373617 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6376af-cb3c-4319-8288-952fc76b66d5" path="/var/lib/kubelet/pods/5d6376af-cb3c-4319-8288-952fc76b66d5/volumes" Oct 31 00:54:14.574601 kubelet[2081]: E1031 00:54:14.574547 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:14.578274 env[1321]: time="2025-10-31T00:54:14.578120310Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 31 00:54:14.594423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970311706.mount: Deactivated successfully. 
Oct 31 00:54:14.603967 env[1321]: time="2025-10-31T00:54:14.603736322Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3\"" Oct 31 00:54:14.604724 env[1321]: time="2025-10-31T00:54:14.604695012Z" level=info msg="StartContainer for \"b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3\"" Oct 31 00:54:14.662718 env[1321]: time="2025-10-31T00:54:14.662625021Z" level=info msg="StartContainer for \"b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3\" returns successfully" Oct 31 00:54:14.692869 env[1321]: time="2025-10-31T00:54:14.692825118Z" level=info msg="shim disconnected" id=b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3 Oct 31 00:54:14.692869 env[1321]: time="2025-10-31T00:54:14.692868159Z" level=warning msg="cleaning up after shim disconnected" id=b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3 namespace=k8s.io Oct 31 00:54:14.692869 env[1321]: time="2025-10-31T00:54:14.692877839Z" level=info msg="cleaning up dead shim" Oct 31 00:54:14.700008 env[1321]: time="2025-10-31T00:54:14.699966869Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4208 runtime=io.containerd.runc.v2\n" Oct 31 00:54:15.371939 kubelet[2081]: E1031 00:54:15.371886 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:15.430891 kubelet[2081]: E1031 00:54:15.430844 2081 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 31 00:54:15.491846 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b7a2089230122760e11a7d07f26dea475f1831a555b9adc0a81c4eaa11cb03b3-rootfs.mount: Deactivated successfully. Oct 31 00:54:15.583379 kubelet[2081]: E1031 00:54:15.583336 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:15.587028 env[1321]: time="2025-10-31T00:54:15.586984246Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 31 00:54:15.636493 env[1321]: time="2025-10-31T00:54:15.636373399Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee\"" Oct 31 00:54:15.638723 env[1321]: time="2025-10-31T00:54:15.637332648Z" level=info msg="StartContainer for \"7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee\"" Oct 31 00:54:15.698165 env[1321]: time="2025-10-31T00:54:15.698104351Z" level=info msg="StartContainer for \"7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee\" returns successfully" Oct 31 00:54:15.720507 env[1321]: time="2025-10-31T00:54:15.720454325Z" level=info msg="shim disconnected" id=7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee Oct 31 00:54:15.720507 env[1321]: time="2025-10-31T00:54:15.720510365Z" level=warning msg="cleaning up after shim disconnected" id=7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee namespace=k8s.io Oct 31 00:54:15.720725 env[1321]: time="2025-10-31T00:54:15.720520525Z" level=info msg="cleaning up dead shim" Oct 31 00:54:15.727227 env[1321]: time="2025-10-31T00:54:15.727177389Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:15Z\" 
level=info msg=\"starting signal loop\" namespace=k8s.io pid=4264 runtime=io.containerd.runc.v2\n" Oct 31 00:54:16.372289 kubelet[2081]: E1031 00:54:16.372097 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:16.492066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ff38eb1d2af8dc9c21da05a926f7bdefce8e239bebf14729aab16efb16d8fee-rootfs.mount: Deactivated successfully. Oct 31 00:54:16.587039 kubelet[2081]: E1031 00:54:16.586884 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:16.590286 env[1321]: time="2025-10-31T00:54:16.590096074Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 31 00:54:16.612018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910690639.mount: Deactivated successfully. 
Oct 31 00:54:16.616850 env[1321]: time="2025-10-31T00:54:16.616801084Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b\"" Oct 31 00:54:16.617959 env[1321]: time="2025-10-31T00:54:16.617790453Z" level=info msg="StartContainer for \"6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b\"" Oct 31 00:54:16.678399 env[1321]: time="2025-10-31T00:54:16.678275338Z" level=info msg="StartContainer for \"6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b\" returns successfully" Oct 31 00:54:16.720986 env[1321]: time="2025-10-31T00:54:16.720939496Z" level=info msg="shim disconnected" id=6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b Oct 31 00:54:16.720986 env[1321]: time="2025-10-31T00:54:16.720982177Z" level=warning msg="cleaning up after shim disconnected" id=6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b namespace=k8s.io Oct 31 00:54:16.720986 env[1321]: time="2025-10-31T00:54:16.720991497Z" level=info msg="cleaning up dead shim" Oct 31 00:54:16.727626 env[1321]: time="2025-10-31T00:54:16.727562038Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:54:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4318 runtime=io.containerd.runc.v2\n" Oct 31 00:54:17.492237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b5e09e79375c99c398b116c6a8564696cb70ccfcd82a0f5358e6570f5ed1c6b-rootfs.mount: Deactivated successfully. 
Oct 31 00:54:17.589096 kubelet[2081]: E1031 00:54:17.589061 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:17.591523 env[1321]: time="2025-10-31T00:54:17.591481046Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 31 00:54:17.611251 env[1321]: time="2025-10-31T00:54:17.611203625Z" level=info msg="CreateContainer within sandbox \"c6929532b77a69007ea03dfff9d901dc1da6284f62e388060b6ae00821b08e06\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d\"" Oct 31 00:54:17.613324 env[1321]: time="2025-10-31T00:54:17.613287604Z" level=info msg="StartContainer for \"4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d\"" Oct 31 00:54:17.668179 env[1321]: time="2025-10-31T00:54:17.664109947Z" level=info msg="StartContainer for \"4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d\" returns successfully" Oct 31 00:54:17.898369 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Oct 31 00:54:18.593347 kubelet[2081]: E1031 00:54:18.593306 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:19.934325 kubelet[2081]: E1031 00:54:19.934281 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:19.964101 systemd[1]: run-containerd-runc-k8s.io-4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d-runc.7qXFZ0.mount: Deactivated successfully. 
Oct 31 00:54:20.745649 systemd-networkd[1097]: lxc_health: Link UP Oct 31 00:54:20.760544 systemd-networkd[1097]: lxc_health: Gained carrier Oct 31 00:54:20.761216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Oct 31 00:54:21.935821 kubelet[2081]: E1031 00:54:21.935782 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:21.952364 kubelet[2081]: I1031 00:54:21.952296 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jf4rd" podStartSLOduration=8.952269464 podStartE2EDuration="8.952269464s" podCreationTimestamp="2025-10-31 00:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:54:18.676907688 +0000 UTC m=+88.405116184" watchObservedRunningTime="2025-10-31 00:54:21.952269464 +0000 UTC m=+91.680478000" Oct 31 00:54:22.081097 systemd[1]: run-containerd-runc-k8s.io-4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d-runc.TZysaL.mount: Deactivated successfully. 
Oct 31 00:54:22.117281 systemd-networkd[1097]: lxc_health: Gained IPv6LL Oct 31 00:54:22.371550 kubelet[2081]: E1031 00:54:22.371502 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:22.599928 kubelet[2081]: E1031 00:54:22.599874 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:23.601476 kubelet[2081]: E1031 00:54:23.601439 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:24.228273 systemd[1]: run-containerd-runc-k8s.io-4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d-runc.ouDrNr.mount: Deactivated successfully. Oct 31 00:54:26.371806 kubelet[2081]: E1031 00:54:26.371770 2081 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:54:26.383063 systemd[1]: run-containerd-runc-k8s.io-4d077fb022dbe27813135655ebbff28d6173a8577ff2a77f6d4f3074a98a5f3d-runc.fQzH7S.mount: Deactivated successfully. Oct 31 00:54:26.450417 kubelet[2081]: E1031 00:54:26.449843 2081 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:43456->127.0.0.1:40101: read tcp 127.0.0.1:43456->127.0.0.1:40101: read: connection reset by peer Oct 31 00:54:26.452641 sshd[3895]: pam_unix(sshd:session): session closed for user core Oct 31 00:54:26.455743 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:57868.service: Deactivated successfully. Oct 31 00:54:26.456855 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 00:54:26.457330 systemd-logind[1304]: Session 24 logged out. Waiting for processes to exit. 
Oct 31 00:54:26.458477 systemd-logind[1304]: Removed session 24.