Oct 29 00:24:21.726609 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 29 00:24:21.726631 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Oct 28 23:18:12 -00 2025 Oct 29 00:24:21.726640 kernel: efi: EFI v2.70 by EDK II Oct 29 00:24:21.726645 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 29 00:24:21.726650 kernel: random: crng init done Oct 29 00:24:21.726656 kernel: ACPI: Early table checksum verification disabled Oct 29 00:24:21.726662 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 29 00:24:21.726669 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 29 00:24:21.726675 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726681 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726686 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726692 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726697 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726703 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726711 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726717 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726723 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 00:24:21.726728 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 29 00:24:21.726734 kernel: NUMA: Failed to initialise from firmware Oct 29 00:24:21.726740 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 00:24:21.726746 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Oct 29 00:24:21.726752 kernel: Zone ranges: Oct 29 00:24:21.726757 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 00:24:21.726764 kernel: DMA32 empty Oct 29 00:24:21.726770 kernel: Normal empty Oct 29 00:24:21.726775 kernel: Movable zone start for each node Oct 29 00:24:21.726781 kernel: Early memory node ranges Oct 29 00:24:21.726786 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 29 00:24:21.726792 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 29 00:24:21.726798 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 29 00:24:21.726803 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 29 00:24:21.726809 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 29 00:24:21.726814 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 29 00:24:21.726820 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 29 00:24:21.726826 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 00:24:21.726833 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 29 00:24:21.726838 kernel: psci: probing for conduit method from ACPI. Oct 29 00:24:21.726844 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 29 00:24:21.726850 kernel: psci: Using standard PSCI v0.2 function IDs Oct 29 00:24:21.726855 kernel: psci: Trusted OS migration not required Oct 29 00:24:21.726864 kernel: psci: SMC Calling Convention v1.1 Oct 29 00:24:21.726870 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 29 00:24:21.726878 kernel: ACPI: SRAT not present Oct 29 00:24:21.726885 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Oct 29 00:24:21.726891 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Oct 29 00:24:21.726898 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 29 00:24:21.726904 kernel: Detected PIPT I-cache on CPU0 Oct 29 00:24:21.726910 kernel: CPU features: detected: GIC system register CPU interface Oct 29 00:24:21.726916 kernel: CPU features: detected: Hardware dirty bit management Oct 29 00:24:21.726922 kernel: CPU features: detected: Spectre-v4 Oct 29 00:24:21.726928 kernel: CPU features: detected: Spectre-BHB Oct 29 00:24:21.726935 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 29 00:24:21.726942 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 29 00:24:21.726948 kernel: CPU features: detected: ARM erratum 1418040 Oct 29 00:24:21.726954 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 29 00:24:21.726960 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 29 00:24:21.726966 kernel: Policy zone: DMA Oct 29 00:24:21.726973 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ddcdcc5923a51dfb24bee27c235aa754769d72fd417f60397f96d58c38c7a3e3 Oct 29 00:24:21.726980 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 29 00:24:21.726986 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 00:24:21.726992 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 00:24:21.726998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 00:24:21.727006 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Oct 29 00:24:21.727012 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 00:24:21.727018 kernel: trace event string verifier disabled Oct 29 00:24:21.727024 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 00:24:21.727031 kernel: rcu: RCU event tracing is enabled. Oct 29 00:24:21.727046 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 00:24:21.727053 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 00:24:21.727059 kernel: Tracing variant of Tasks RCU enabled. Oct 29 00:24:21.727066 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 29 00:24:21.727072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 00:24:21.727078 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 29 00:24:21.727091 kernel: GICv3: 256 SPIs implemented Oct 29 00:24:21.727098 kernel: GICv3: 0 Extended SPIs implemented Oct 29 00:24:21.727104 kernel: GICv3: Distributor has no Range Selector support Oct 29 00:24:21.727110 kernel: Root IRQ handler: gic_handle_irq Oct 29 00:24:21.727117 kernel: GICv3: 16 PPIs implemented Oct 29 00:24:21.727123 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 29 00:24:21.727129 kernel: ACPI: SRAT not present Oct 29 00:24:21.727135 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 29 00:24:21.727142 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 29 00:24:21.727150 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 29 00:24:21.727156 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 29 00:24:21.727163 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 29 00:24:21.727170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 00:24:21.727177 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 29 00:24:21.727183 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 29 00:24:21.730181 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 29 00:24:21.730192 kernel: arm-pv: using stolen time PV Oct 29 00:24:21.730200 kernel: Console: colour dummy device 80x25 Oct 29 00:24:21.730208 kernel: ACPI: Core revision 20210730 Oct 29 00:24:21.730216 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 29 00:24:21.730224 kernel: pid_max: default: 32768 minimum: 301 Oct 29 00:24:21.730231 kernel: LSM: Security Framework initializing Oct 29 00:24:21.730248 kernel: SELinux: Initializing. Oct 29 00:24:21.730255 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 00:24:21.730263 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 00:24:21.730271 kernel: rcu: Hierarchical SRCU implementation. Oct 29 00:24:21.730291 kernel: Platform MSI: ITS@0x8080000 domain created Oct 29 00:24:21.730299 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 29 00:24:21.730306 kernel: Remapping and enabling EFI services. Oct 29 00:24:21.730313 kernel: smp: Bringing up secondary CPUs ... 
Oct 29 00:24:21.730320 kernel: Detected PIPT I-cache on CPU1 Oct 29 00:24:21.730329 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 29 00:24:21.730336 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 29 00:24:21.730342 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 00:24:21.730349 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 29 00:24:21.730357 kernel: Detected PIPT I-cache on CPU2 Oct 29 00:24:21.730364 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 29 00:24:21.730372 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 29 00:24:21.730379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 00:24:21.730385 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 29 00:24:21.730391 kernel: Detected PIPT I-cache on CPU3 Oct 29 00:24:21.730407 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 29 00:24:21.730414 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 29 00:24:21.730420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 00:24:21.730427 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 29 00:24:21.730440 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 00:24:21.730449 kernel: SMP: Total of 4 processors activated. Oct 29 00:24:21.730455 kernel: CPU features: detected: 32-bit EL0 Support Oct 29 00:24:21.730462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 29 00:24:21.730470 kernel: CPU features: detected: Common not Private translations Oct 29 00:24:21.730476 kernel: CPU features: detected: CRC32 instructions Oct 29 00:24:21.730483 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 29 00:24:21.730490 kernel: CPU features: detected: LSE atomic instructions Oct 29 00:24:21.730498 kernel: CPU features: detected: Privileged Access Never Oct 29 00:24:21.730505 kernel: CPU features: detected: RAS Extension Support Oct 29 00:24:21.730512 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 29 00:24:21.730520 kernel: CPU: All CPU(s) started at EL1 Oct 29 00:24:21.730526 kernel: alternatives: patching kernel code Oct 29 00:24:21.730535 kernel: devtmpfs: initialized Oct 29 00:24:21.730541 kernel: KASLR enabled Oct 29 00:24:21.730548 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 00:24:21.730555 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 00:24:21.730562 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 00:24:21.730569 kernel: SMBIOS 3.0.0 present. 
Oct 29 00:24:21.730575 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 29 00:24:21.730582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 00:24:21.730589 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 29 00:24:21.730597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 29 00:24:21.730604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 29 00:24:21.730611 kernel: audit: initializing netlink subsys (disabled) Oct 29 00:24:21.730617 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Oct 29 00:24:21.730624 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 00:24:21.730631 kernel: cpuidle: using governor menu Oct 29 00:24:21.730638 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 29 00:24:21.730645 kernel: ASID allocator initialised with 32768 entries Oct 29 00:24:21.730651 kernel: ACPI: bus type PCI registered Oct 29 00:24:21.730659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 00:24:21.730666 kernel: Serial: AMBA PL011 UART driver Oct 29 00:24:21.730673 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 00:24:21.730680 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 29 00:24:21.730687 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 00:24:21.730694 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 29 00:24:21.730700 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 00:24:21.730707 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 29 00:24:21.730714 kernel: ACPI: Added _OSI(Module Device) Oct 29 00:24:21.730722 kernel: ACPI: Added _OSI(Processor Device) Oct 29 00:24:21.730729 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 00:24:21.730736 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 29 00:24:21.730742 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 29 00:24:21.730749 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 29 00:24:21.730756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 00:24:21.730763 kernel: ACPI: Interpreter enabled Oct 29 00:24:21.730769 kernel: ACPI: Using GIC for interrupt routing Oct 29 00:24:21.730776 kernel: ACPI: MCFG table detected, 1 entries Oct 29 00:24:21.730785 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 29 00:24:21.730792 kernel: printk: console [ttyAMA0] enabled Oct 29 00:24:21.730799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 00:24:21.730975 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 00:24:21.731059 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 29 00:24:21.731122 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 29 00:24:21.731183 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 29 00:24:21.731247 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 29 00:24:21.731256 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 29 00:24:21.731263 kernel: PCI host bridge to bus 0000:00 Oct 29 00:24:21.731340 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 29 00:24:21.731412 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 29 
00:24:21.731478 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 29 00:24:21.731533 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 00:24:21.736744 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 29 00:24:21.736841 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 29 00:24:21.736905 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 29 00:24:21.736969 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 29 00:24:21.737030 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 29 00:24:21.737115 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 29 00:24:21.737177 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 29 00:24:21.737244 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 29 00:24:21.737304 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 29 00:24:21.737358 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 29 00:24:21.737430 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 29 00:24:21.737440 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 29 00:24:21.737447 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 29 00:24:21.737454 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 29 00:24:21.737461 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 29 00:24:21.737470 kernel: iommu: Default domain type: Translated Oct 29 00:24:21.737477 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 29 00:24:21.737484 kernel: vgaarb: loaded Oct 29 00:24:21.737491 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 29 00:24:21.737498 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 29 00:24:21.737504 kernel: PTP clock support registered Oct 29 00:24:21.737511 kernel: Registered efivars operations Oct 29 00:24:21.737518 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 29 00:24:21.737524 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 00:24:21.737533 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 00:24:21.737540 kernel: pnp: PnP ACPI init Oct 29 00:24:21.737612 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 29 00:24:21.737623 kernel: pnp: PnP ACPI: found 1 devices Oct 29 00:24:21.737630 kernel: NET: Registered PF_INET protocol family Oct 29 00:24:21.737637 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 00:24:21.737644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 00:24:21.737651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 00:24:21.737660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 00:24:21.737667 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 29 00:24:21.737674 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 00:24:21.737681 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 00:24:21.737688 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 00:24:21.737695 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 00:24:21.737702 kernel: PCI: CLS 0 bytes, default 64 Oct 29 00:24:21.737709 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 29 00:24:21.737715 kernel: kvm [1]: HYP mode not available Oct 29 00:24:21.737723 kernel: Initialise system trusted keyrings Oct 29 00:24:21.737730 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 00:24:21.737737 kernel: Key type asymmetric registered Oct 29 00:24:21.737743 kernel: Asymmetric key parser 'x509' registered Oct 29 00:24:21.737750 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 00:24:21.737756 kernel: io scheduler mq-deadline registered Oct 29 00:24:21.737763 kernel: io scheduler kyber registered Oct 29 00:24:21.737770 kernel: io scheduler bfq registered Oct 29 00:24:21.737777 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 29 00:24:21.737785 kernel: ACPI: button: Power Button [PWRB] Oct 29 00:24:21.737792 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 29 00:24:21.737854 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 29 00:24:21.737863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 00:24:21.737869 kernel: thunder_xcv, ver 1.0 Oct 29 00:24:21.737876 kernel: thunder_bgx, ver 1.0 Oct 29 00:24:21.737883 kernel: nicpf, ver 1.0 Oct 29 00:24:21.737889 kernel: nicvf, ver 1.0 Oct 29 00:24:21.737969 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 29 00:24:21.738030 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T00:24:21 UTC (1761697461) Oct 29 00:24:21.738049 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 29 00:24:21.738057 kernel: NET: Registered PF_INET6 protocol family Oct 29 00:24:21.738064 kernel: Segment Routing with IPv6 Oct 29 00:24:21.738070 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 00:24:21.738077 kernel: NET: Registered PF_PACKET protocol family Oct 29 00:24:21.738084 kernel: Key type 
dns_resolver registered Oct 29 00:24:21.738091 kernel: registered taskstats version 1 Oct 29 00:24:21.738100 kernel: Loading compiled-in X.509 certificates Oct 29 00:24:21.738107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 365034a3270fb89208cc05b5e556df135e9c6322' Oct 29 00:24:21.738114 kernel: Key type .fscrypt registered Oct 29 00:24:21.738121 kernel: Key type fscrypt-provisioning registered Oct 29 00:24:21.738128 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 29 00:24:21.738135 kernel: ima: Allocated hash algorithm: sha1 Oct 29 00:24:21.738142 kernel: ima: No architecture policies found Oct 29 00:24:21.738149 kernel: clk: Disabling unused clocks Oct 29 00:24:21.738156 kernel: Freeing unused kernel memory: 36416K Oct 29 00:24:21.738164 kernel: Run /init as init process Oct 29 00:24:21.738170 kernel: with arguments: Oct 29 00:24:21.738178 kernel: /init Oct 29 00:24:21.738184 kernel: with environment: Oct 29 00:24:21.738191 kernel: HOME=/ Oct 29 00:24:21.738198 kernel: TERM=linux Oct 29 00:24:21.738204 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 29 00:24:21.738213 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 00:24:21.738224 systemd[1]: Detected virtualization kvm. Oct 29 00:24:21.738232 systemd[1]: Detected architecture arm64. Oct 29 00:24:21.738239 systemd[1]: Running in initrd. Oct 29 00:24:21.738246 systemd[1]: No hostname configured, using default hostname. Oct 29 00:24:21.738253 systemd[1]: Hostname set to <localhost>. Oct 29 00:24:21.738260 systemd[1]: Initializing machine ID from VM UUID. Oct 29 00:24:21.738267 systemd[1]: Queued start job for default target initrd.target. Oct 29 00:24:21.738274 systemd[1]: Started systemd-ask-password-console.path. Oct 29 00:24:21.738283 systemd[1]: Reached target cryptsetup.target. Oct 29 00:24:21.738290 systemd[1]: Reached target paths.target. Oct 29 00:24:21.738297 systemd[1]: Reached target slices.target. Oct 29 00:24:21.738305 systemd[1]: Reached target swap.target. Oct 29 00:24:21.738312 systemd[1]: Reached target timers.target. Oct 29 00:24:21.738319 systemd[1]: Listening on iscsid.socket. Oct 29 00:24:21.738326 systemd[1]: Listening on iscsiuio.socket. Oct 29 00:24:21.738335 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 00:24:21.738342 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 00:24:21.738349 systemd[1]: Listening on systemd-journald.socket. Oct 29 00:24:21.738356 systemd[1]: Listening on systemd-networkd.socket. Oct 29 00:24:21.738364 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 00:24:21.738371 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 00:24:21.738379 systemd[1]: Reached target sockets.target. Oct 29 00:24:21.738386 systemd[1]: Starting kmod-static-nodes.service... Oct 29 00:24:21.738393 systemd[1]: Finished network-cleanup.service. Oct 29 00:24:21.738413 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 00:24:21.738421 systemd[1]: Starting systemd-journald.service... Oct 29 00:24:21.738428 systemd[1]: Starting systemd-modules-load.service... Oct 29 00:24:21.738436 systemd[1]: Starting systemd-resolved.service... Oct 29 00:24:21.738444 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 29 00:24:21.738451 systemd[1]: Finished kmod-static-nodes.service. Oct 29 00:24:21.738459 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 00:24:21.738466 kernel: audit: type=1130 audit(1761697461.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.738476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 00:24:21.738491 systemd-journald[290]: Journal started Oct 29 00:24:21.738547 systemd-journald[290]: Runtime Journal (/run/log/journal/27a5fa4892f04523b9d56a7cfff4b5e9) is 6.0M, max 48.7M, 42.6M free. Oct 29 00:24:21.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.727657 systemd-modules-load[291]: Inserted module 'overlay' Oct 29 00:24:21.741664 systemd[1]: Started systemd-journald.service. Oct 29 00:24:21.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.742211 systemd[1]: Finished systemd-vconsole-setup.service. Oct 29 00:24:21.748979 kernel: audit: type=1130 audit(1761697461.741:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.749007 kernel: audit: type=1130 audit(1761697461.744:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.745312 systemd-resolved[292]: Positive Trust Anchors: Oct 29 00:24:21.753731 kernel: audit: type=1130 audit(1761697461.749:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.753763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 00:24:21.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.745321 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:24:21.745350 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 00:24:21.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.745630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 00:24:21.769782 kernel: audit: type=1130 audit(1761697461.757:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.769810 kernel: Bridge firewalling registered Oct 29 00:24:21.749979 systemd-resolved[292]: Defaulting to hostname 'linux'. Oct 29 00:24:21.751068 systemd[1]: Starting dracut-cmdline-ask.service... Oct 29 00:24:21.756391 systemd[1]: Started systemd-resolved.service. Oct 29 00:24:21.758500 systemd[1]: Reached target nss-lookup.target. Oct 29 00:24:21.767140 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 29 00:24:21.779431 kernel: SCSI subsystem initialized Oct 29 00:24:21.780286 systemd[1]: Finished dracut-cmdline-ask.service. Oct 29 00:24:21.785550 kernel: audit: type=1130 audit(1761697461.781:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.782222 systemd[1]: Starting dracut-cmdline.service... Oct 29 00:24:21.790196 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 00:24:21.790220 kernel: device-mapper: uevent: version 1.0.3 Oct 29 00:24:21.790236 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 29 00:24:21.792116 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 29 00:24:21.793013 systemd[1]: Finished systemd-modules-load.service. Oct 29 00:24:21.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.797719 systemd[1]: Starting systemd-sysctl.service... Oct 29 00:24:21.799691 kernel: audit: type=1130 audit(1761697461.793:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:21.800193 dracut-cmdline[307]: dracut-dracut-053 Oct 29 00:24:21.803070 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ddcdcc5923a51dfb24bee27c235aa754769d72fd417f60397f96d58c38c7a3e3 Oct 29 00:24:21.805087 systemd[1]: Finished systemd-sysctl.service. Oct 29 00:24:21.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.812440 kernel: audit: type=1130 audit(1761697461.808:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.870423 kernel: Loading iSCSI transport class v2.0-870. Oct 29 00:24:21.882417 kernel: iscsi: registered transport (tcp) Oct 29 00:24:21.898431 kernel: iscsi: registered transport (qla4xxx) Oct 29 00:24:21.898493 kernel: QLogic iSCSI HBA Driver Oct 29 00:24:21.936262 systemd[1]: Finished dracut-cmdline.service. Oct 29 00:24:21.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.938151 systemd[1]: Starting dracut-pre-udev.service... Oct 29 00:24:21.941536 kernel: audit: type=1130 audit(1761697461.936:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:21.983438 kernel: raid6: neonx8 gen() 13740 MB/s Oct 29 00:24:22.000425 kernel: raid6: neonx8 xor() 10808 MB/s Oct 29 00:24:22.017420 kernel: raid6: neonx4 gen() 13467 MB/s Oct 29 00:24:22.034420 kernel: raid6: neonx4 xor() 11147 MB/s Oct 29 00:24:22.051418 kernel: raid6: neonx2 gen() 13007 MB/s Oct 29 00:24:22.068421 kernel: raid6: neonx2 xor() 10292 MB/s Oct 29 00:24:22.085422 kernel: raid6: neonx1 gen() 10524 MB/s Oct 29 00:24:22.102421 kernel: raid6: neonx1 xor() 8755 MB/s Oct 29 00:24:22.119420 kernel: raid6: int64x8 gen() 6254 MB/s Oct 29 00:24:22.136423 kernel: raid6: int64x8 xor() 3539 MB/s Oct 29 00:24:22.153428 kernel: raid6: int64x4 gen() 7212 MB/s Oct 29 00:24:22.170427 kernel: raid6: int64x4 xor() 3848 MB/s Oct 29 00:24:22.187424 kernel: raid6: int64x2 gen() 6145 MB/s Oct 29 00:24:22.204441 kernel: raid6: int64x2 xor() 3317 MB/s Oct 29 00:24:22.221440 kernel: raid6: int64x1 gen() 5030 MB/s Oct 29 00:24:22.239082 kernel: raid6: int64x1 xor() 2640 MB/s Oct 29 00:24:22.239133 kernel: raid6: using algorithm neonx8 gen() 13740 MB/s Oct 29 00:24:22.239143 kernel: raid6: .... xor() 10808 MB/s, rmw enabled Oct 29 00:24:22.239163 kernel: raid6: using neon recovery algorithm Oct 29 00:24:22.251425 kernel: xor: measuring software checksum speed Oct 29 00:24:22.251460 kernel: 8regs : 17235 MB/sec Oct 29 00:24:22.252716 kernel: 32regs : 18456 MB/sec Oct 29 00:24:22.252734 kernel: arm64_neon : 26113 MB/sec Oct 29 00:24:22.252743 kernel: xor: using function: arm64_neon (26113 MB/sec) Oct 29 00:24:22.311444 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 29 00:24:22.323237 systemd[1]: Finished dracut-pre-udev.service. 
Oct 29 00:24:22.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:22.324000 audit: BPF prog-id=7 op=LOAD Oct 29 00:24:22.324000 audit: BPF prog-id=8 op=LOAD Oct 29 00:24:22.325283 systemd[1]: Starting systemd-udevd.service... Oct 29 00:24:22.338250 systemd-udevd[491]: Using default interface naming scheme 'v252'. Oct 29 00:24:22.341743 systemd[1]: Started systemd-udevd.service. Oct 29 00:24:22.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:22.343535 systemd[1]: Starting dracut-pre-trigger.service... Oct 29 00:24:22.356384 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Oct 29 00:24:22.388552 systemd[1]: Finished dracut-pre-trigger.service. Oct 29 00:24:22.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:22.390269 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 00:24:22.426158 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 00:24:22.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:22.458126 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 29 00:24:22.463761 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 00:24:22.463786 kernel: GPT:9289727 != 19775487 Oct 29 00:24:22.463796 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 00:24:22.463805 kernel: GPT:9289727 != 19775487 Oct 29 00:24:22.463813 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 00:24:22.463821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:24:22.476881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 29 00:24:22.480287 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 29 00:24:22.481450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 29 00:24:22.487060 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (546) Oct 29 00:24:22.496720 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 29 00:24:22.500553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 00:24:22.502374 systemd[1]: Starting disk-uuid.service... Oct 29 00:24:22.522812 disk-uuid[563]: Primary Header is updated. Oct 29 00:24:22.522812 disk-uuid[563]: Secondary Entries is updated. Oct 29 00:24:22.522812 disk-uuid[563]: Secondary Header is updated. Oct 29 00:24:22.527414 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:24:22.530426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:24:23.533427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:24:23.533615 disk-uuid[564]: The operation has completed successfully. Oct 29 00:24:23.557160 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 00:24:23.557268 systemd[1]: Finished disk-uuid.service. 
Oct 29 00:24:23.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.562007 systemd[1]: Starting verity-setup.service... Oct 29 00:24:23.578419 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 29 00:24:23.603843 systemd[1]: Found device dev-mapper-usr.device. Oct 29 00:24:23.605549 systemd[1]: Mounting sysusr-usr.mount... Oct 29 00:24:23.606361 systemd[1]: Finished verity-setup.service. Oct 29 00:24:23.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.657356 systemd[1]: Mounted sysusr-usr.mount. Oct 29 00:24:23.658695 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 29 00:24:23.658190 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 29 00:24:23.659011 systemd[1]: Starting ignition-setup.service... Oct 29 00:24:23.661345 systemd[1]: Starting parse-ip-for-networkd.service... Oct 29 00:24:23.669445 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 00:24:23.669500 kernel: BTRFS info (device vda6): using free space tree Oct 29 00:24:23.669510 kernel: BTRFS info (device vda6): has skinny extents Oct 29 00:24:23.679660 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 29 00:24:23.729315 systemd[1]: Finished ignition-setup.service. Oct 29 00:24:23.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.731202 systemd[1]: Starting ignition-fetch-offline.service... Oct 29 00:24:23.741740 systemd[1]: Finished parse-ip-for-networkd.service. Oct 29 00:24:23.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.743000 audit: BPF prog-id=9 op=LOAD Oct 29 00:24:23.743998 systemd[1]: Starting systemd-networkd.service... Oct 29 00:24:23.765293 systemd-networkd[737]: lo: Link UP Oct 29 00:24:23.765307 systemd-networkd[737]: lo: Gained carrier Oct 29 00:24:23.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.765997 systemd-networkd[737]: Enumeration completed Oct 29 00:24:23.766132 systemd[1]: Started systemd-networkd.service. Oct 29 00:24:23.766454 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 00:24:23.767143 systemd[1]: Reached target network.target. Oct 29 00:24:23.768232 systemd-networkd[737]: eth0: Link UP Oct 29 00:24:23.768236 systemd-networkd[737]: eth0: Gained carrier Oct 29 00:24:23.769621 systemd[1]: Starting iscsiuio.service... Oct 29 00:24:23.777469 systemd[1]: Started iscsiuio.service. 
Oct 29 00:24:23.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.779527 systemd[1]: Starting iscsid.service... Oct 29 00:24:23.783855 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 29 00:24:23.783855 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 29 00:24:23.783855 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 29 00:24:23.783855 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 29 00:24:23.783855 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 29 00:24:23.783855 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 29 00:24:23.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.788140 systemd[1]: Started iscsid.service. Oct 29 00:24:23.792950 systemd[1]: Starting dracut-initqueue.service... Oct 29 00:24:23.795520 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:24:23.806225 systemd[1]: Finished dracut-initqueue.service. Oct 29 00:24:23.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.807264 systemd[1]: Reached target remote-fs-pre.target. Oct 29 00:24:23.808953 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 00:24:23.810496 systemd[1]: Reached target remote-fs.target. Oct 29 00:24:23.810795 ignition[731]: Ignition 2.14.0 Oct 29 00:24:23.812951 systemd[1]: Starting dracut-pre-mount.service... Oct 29 00:24:23.810803 ignition[731]: Stage: fetch-offline Oct 29 00:24:23.810853 ignition[731]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:23.810863 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:23.811001 ignition[731]: parsed url from cmdline: "" Oct 29 00:24:23.811004 ignition[731]: no config URL provided Oct 29 00:24:23.811009 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 00:24:23.811025 ignition[731]: no config at "/usr/lib/ignition/user.ign" Oct 29 00:24:23.811049 ignition[731]: op(1): [started] loading QEMU firmware config module Oct 29 00:24:23.811054 ignition[731]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 00:24:23.821716 systemd[1]: Finished dracut-pre-mount.service. Oct 29 00:24:23.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Oct 29 00:24:23.820942 ignition[731]: op(1): [finished] loading QEMU firmware config module Oct 29 00:24:23.865886 ignition[731]: parsing config with SHA512: 1f24f9297e145e045f3f79e0a6d8cb3608fe1b5472c95528aa95c5be492145094de408a23d52e56ef0084a776a3caa39349047ed57fc8603851bce7d34cad7ca Oct 29 00:24:23.874757 unknown[731]: fetched base config from "system" Oct 29 00:24:23.874772 unknown[731]: fetched user config from "qemu" Oct 29 00:24:23.875291 ignition[731]: fetch-offline: fetch-offline passed Oct 29 00:24:23.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.876684 systemd[1]: Finished ignition-fetch-offline.service. Oct 29 00:24:23.875349 ignition[731]: Ignition finished successfully Oct 29 00:24:23.878244 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 00:24:23.879127 systemd[1]: Starting ignition-kargs.service... Oct 29 00:24:23.889559 ignition[765]: Ignition 2.14.0 Oct 29 00:24:23.889567 ignition[765]: Stage: kargs Oct 29 00:24:23.889676 ignition[765]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:23.889685 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:23.890805 ignition[765]: kargs: kargs passed Oct 29 00:24:23.892966 systemd[1]: Finished ignition-kargs.service. Oct 29 00:24:23.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.890935 ignition[765]: Ignition finished successfully Oct 29 00:24:23.895063 systemd[1]: Starting ignition-disks.service... Oct 29 00:24:23.902805 ignition[771]: Ignition 2.14.0 Oct 29 00:24:23.902815 ignition[771]: Stage: disks Oct 29 00:24:23.902929 ignition[771]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:23.902939 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:23.903965 ignition[771]: disks: disks passed Oct 29 00:24:23.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.905788 systemd[1]: Finished ignition-disks.service. Oct 29 00:24:23.904026 ignition[771]: Ignition finished successfully Oct 29 00:24:23.907522 systemd[1]: Reached target initrd-root-device.target. Oct 29 00:24:23.908782 systemd[1]: Reached target local-fs-pre.target. Oct 29 00:24:23.910063 systemd[1]: Reached target local-fs.target. Oct 29 00:24:23.911300 systemd[1]: Reached target sysinit.target. Oct 29 00:24:23.912738 systemd[1]: Reached target basic.target. Oct 29 00:24:23.915003 systemd[1]: Starting systemd-fsck-root.service... Oct 29 00:24:23.927318 systemd-fsck[779]: ROOT: clean, 637/553520 files, 56031/553472 blocks Oct 29 00:24:23.965700 systemd[1]: Finished systemd-fsck-root.service. Oct 29 00:24:23.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:23.967947 systemd[1]: Mounting sysroot.mount... Oct 29 00:24:23.976420 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Oct 29 00:24:23.976984 systemd[1]: Mounted sysroot.mount. Oct 29 00:24:23.977803 systemd[1]: Reached target initrd-root-fs.target. Oct 29 00:24:23.980345 systemd[1]: Mounting sysroot-usr.mount... Oct 29 00:24:23.981318 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 29 00:24:23.981364 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 00:24:23.981391 systemd[1]: Reached target ignition-diskful.target. Oct 29 00:24:23.983719 systemd[1]: Mounted sysroot-usr.mount. Oct 29 00:24:23.985851 systemd[1]: Starting initrd-setup-root.service... Oct 29 00:24:23.990642 initrd-setup-root[789]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 00:24:23.994797 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory Oct 29 00:24:23.999322 initrd-setup-root[805]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 00:24:24.003684 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 00:24:24.041728 systemd[1]: Finished initrd-setup-root.service. Oct 29 00:24:24.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:24.043586 systemd[1]: Starting ignition-mount.service... Oct 29 00:24:24.045227 systemd[1]: Starting sysroot-boot.service... Oct 29 00:24:24.051189 bash[830]: umount: /sysroot/usr/share/oem: not mounted. Oct 29 00:24:24.061489 ignition[832]: INFO : Ignition 2.14.0 Oct 29 00:24:24.062527 ignition[832]: INFO : Stage: mount Oct 29 00:24:24.063457 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:24.064476 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:24.065621 ignition[832]: INFO : mount: mount passed Oct 29 00:24:24.066340 ignition[832]: INFO : Ignition finished successfully Oct 29 00:24:24.066466 systemd[1]: Finished ignition-mount.service. Oct 29 00:24:24.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:24.070097 systemd[1]: Finished sysroot-boot.service. Oct 29 00:24:24.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:24.615312 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 29 00:24:24.623366 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (841) Oct 29 00:24:24.623431 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 00:24:24.623442 kernel: BTRFS info (device vda6): using free space tree Oct 29 00:24:24.624796 kernel: BTRFS info (device vda6): has skinny extents Oct 29 00:24:24.629032 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 29 00:24:24.630831 systemd[1]: Starting ignition-files.service... 
Oct 29 00:24:24.646170 ignition[861]: INFO : Ignition 2.14.0 Oct 29 00:24:24.646170 ignition[861]: INFO : Stage: files Oct 29 00:24:24.647892 ignition[861]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:24.647892 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:24.647892 ignition[861]: DEBUG : files: compiled without relabeling support, skipping Oct 29 00:24:24.653371 ignition[861]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 00:24:24.653371 ignition[861]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 00:24:24.653371 ignition[861]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 00:24:24.653371 ignition[861]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 00:24:24.653371 ignition[861]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 00:24:24.653371 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 29 00:24:24.653371 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 29 00:24:24.653371 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 29 00:24:24.653371 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 29 00:24:24.652460 unknown[861]: wrote ssh authorized keys file for user: core Oct 29 00:24:24.706289 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 29 00:24:24.843033 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 29 00:24:24.845230 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 00:24:24.845230 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 29 00:24:25.025345 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 29 00:24:25.120402 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 00:24:25.120402 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:24:25.124180 ignition[861]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 29 00:24:25.124180 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 29 00:24:25.449442 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Oct 29 00:24:25.751678 systemd-networkd[737]: eth0: Gained IPv6LL Oct 29 00:24:25.757214 ignition[861]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 29 00:24:25.757214 ignition[861]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(13): 
[started] setting preset to enabled for "prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 00:24:25.760827 ignition[861]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:24:25.792087 ignition[861]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:24:25.792087 ignition[861]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 00:24:25.792087 ignition[861]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:24:25.792087 ignition[861]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:24:25.792087 ignition[861]: INFO : files: files passed Oct 29 00:24:25.792087 ignition[861]: INFO : Ignition finished successfully Oct 29 00:24:25.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.793044 systemd[1]: Finished ignition-files.service. Oct 29 00:24:25.795949 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 29 00:24:25.797149 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 29 00:24:25.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.809252 initrd-setup-root-after-ignition[886]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 29 00:24:25.798023 systemd[1]: Starting ignition-quench.service... Oct 29 00:24:25.812930 initrd-setup-root-after-ignition[888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:24:25.801110 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 00:24:25.801211 systemd[1]: Finished ignition-quench.service. Oct 29 00:24:25.806224 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 29 00:24:25.807447 systemd[1]: Reached target ignition-complete.target. Oct 29 00:24:25.810776 systemd[1]: Starting initrd-parse-etc.service... Oct 29 00:24:25.825787 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 00:24:25.825894 systemd[1]: Finished initrd-parse-etc.service. Oct 29 00:24:25.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:25.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.827741 systemd[1]: Reached target initrd-fs.target. Oct 29 00:24:25.829092 systemd[1]: Reached target initrd.target. Oct 29 00:24:25.830457 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 29 00:24:25.831383 systemd[1]: Starting dracut-pre-pivot.service... Oct 29 00:24:25.843579 systemd[1]: Finished dracut-pre-pivot.service. Oct 29 00:24:25.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.845454 systemd[1]: Starting initrd-cleanup.service... Oct 29 00:24:25.855607 systemd[1]: Stopped target nss-lookup.target. Oct 29 00:24:25.856553 systemd[1]: Stopped target remote-cryptsetup.target. Oct 29 00:24:25.858180 systemd[1]: Stopped target timers.target. Oct 29 00:24:25.859666 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 00:24:25.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.859801 systemd[1]: Stopped dracut-pre-pivot.service. Oct 29 00:24:25.861463 systemd[1]: Stopped target initrd.target. Oct 29 00:24:25.863265 systemd[1]: Stopped target basic.target. Oct 29 00:24:25.864653 systemd[1]: Stopped target ignition-complete.target. Oct 29 00:24:25.866147 systemd[1]: Stopped target ignition-diskful.target. Oct 29 00:24:25.867688 systemd[1]: Stopped target initrd-root-device.target. Oct 29 00:24:25.869394 systemd[1]: Stopped target remote-fs.target. Oct 29 00:24:25.870971 systemd[1]: Stopped target remote-fs-pre.target. Oct 29 00:24:25.872600 systemd[1]: Stopped target sysinit.target. Oct 29 00:24:25.873975 systemd[1]: Stopped target local-fs.target. Oct 29 00:24:25.875430 systemd[1]: Stopped target local-fs-pre.target. Oct 29 00:24:25.876843 systemd[1]: Stopped target swap.target. Oct 29 00:24:25.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.878145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 00:24:25.878273 systemd[1]: Stopped dracut-pre-mount.service. Oct 29 00:24:25.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.879783 systemd[1]: Stopped target cryptsetup.target. Oct 29 00:24:25.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.881226 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 00:24:25.881350 systemd[1]: Stopped dracut-initqueue.service. Oct 29 00:24:25.883078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 00:24:25.883189 systemd[1]: Stopped ignition-fetch-offline.service. Oct 29 00:24:25.884631 systemd[1]: Stopped target paths.target. 
Oct 29 00:24:25.885953 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 00:24:25.890599 systemd[1]: Stopped systemd-ask-password-console.path. Oct 29 00:24:25.891826 systemd[1]: Stopped target slices.target. Oct 29 00:24:25.893460 systemd[1]: Stopped target sockets.target. Oct 29 00:24:25.895005 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 00:24:25.895081 systemd[1]: Closed iscsid.socket. Oct 29 00:24:25.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.896254 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 00:24:25.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.896318 systemd[1]: Closed iscsiuio.socket. Oct 29 00:24:25.897722 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 00:24:25.897834 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 29 00:24:25.899503 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 00:24:25.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.899609 systemd[1]: Stopped ignition-files.service. Oct 29 00:24:25.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.901812 systemd[1]: Stopping ignition-mount.service... Oct 29 00:24:25.910270 ignition[901]: INFO : Ignition 2.14.0 Oct 29 00:24:25.910270 ignition[901]: INFO : Stage: umount Oct 29 00:24:25.910270 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:24:25.910270 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:24:25.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.904052 systemd[1]: Stopping sysroot-boot.service... Oct 29 00:24:25.917339 ignition[901]: INFO : umount: umount passed Oct 29 00:24:25.917339 ignition[901]: INFO : Ignition finished successfully Oct 29 00:24:25.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.904840 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 00:24:25.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.904975 systemd[1]: Stopped systemd-udev-trigger.service. 
Oct 29 00:24:25.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.906844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 00:24:25.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.906942 systemd[1]: Stopped dracut-pre-trigger.service. Oct 29 00:24:25.912629 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 00:24:25.912730 systemd[1]: Finished initrd-cleanup.service. Oct 29 00:24:25.915950 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 00:24:25.916389 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 00:24:25.916491 systemd[1]: Stopped ignition-mount.service. Oct 29 00:24:25.918150 systemd[1]: Stopped target network.target. Oct 29 00:24:25.919453 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 00:24:25.919516 systemd[1]: Stopped ignition-disks.service. Oct 29 00:24:25.920966 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 00:24:25.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.921020 systemd[1]: Stopped ignition-kargs.service. Oct 29 00:24:25.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.922613 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 00:24:25.922652 systemd[1]: Stopped ignition-setup.service. Oct 29 00:24:25.924494 systemd[1]: Stopping systemd-networkd.service... Oct 29 00:24:25.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.925884 systemd[1]: Stopping systemd-resolved.service... Oct 29 00:24:25.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.943000 audit: BPF prog-id=6 op=UNLOAD Oct 29 00:24:25.933568 systemd-networkd[737]: eth0: DHCPv6 lease lost Oct 29 00:24:25.944000 audit: BPF prog-id=9 op=UNLOAD Oct 29 00:24:25.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.934606 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 00:24:25.934704 systemd[1]: Stopped systemd-resolved.service. Oct 29 00:24:25.936429 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 00:24:25.936525 systemd[1]: Stopped systemd-networkd.service. Oct 29 00:24:25.937898 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 00:24:25.937925 systemd[1]: Closed systemd-networkd.socket. Oct 29 00:24:25.939812 systemd[1]: Stopping network-cleanup.service... Oct 29 00:24:25.940583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Oct 29 00:24:25.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.940641 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 29 00:24:25.942612 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:24:25.942657 systemd[1]: Stopped systemd-sysctl.service. Oct 29 00:24:25.944667 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 00:24:25.944713 systemd[1]: Stopped systemd-modules-load.service. Oct 29 00:24:25.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.945689 systemd[1]: Stopping systemd-udevd.service... Oct 29 00:24:25.951483 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 29 00:24:25.954317 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 00:24:25.969176 kernel: kauditd_printk_skb: 54 callbacks suppressed Oct 29 00:24:25.969201 kernel: audit: type=1131 audit(1761697465.964:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.954536 systemd[1]: Stopped network-cleanup.service. Oct 29 00:24:25.972926 kernel: audit: type=1131 audit(1761697465.969:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.959203 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 00:24:25.977099 kernel: audit: type=1131 audit(1761697465.973:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.959330 systemd[1]: Stopped systemd-udevd.service. Oct 29 00:24:25.961077 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 00:24:25.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.961110 systemd[1]: Closed systemd-udevd-control.socket. Oct 29 00:24:25.987445 kernel: audit: type=1131 audit(1761697465.979:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:25.987472 kernel: audit: type=1131 audit(1761697465.984:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.962219 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 00:24:25.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.962251 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 29 00:24:25.998770 kernel: audit: type=1131 audit(1761697465.988:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.998794 kernel: audit: type=1130 audit(1761697465.992:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.998815 kernel: audit: type=1131 audit(1761697465.992:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:25.963619 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 00:24:25.963666 systemd[1]: Stopped dracut-pre-udev.service. Oct 29 00:24:25.965406 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 00:24:25.965449 systemd[1]: Stopped dracut-cmdline.service. Oct 29 00:24:25.969858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 00:24:25.969904 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 29 00:24:25.974517 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 29 00:24:25.977819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 00:24:25.977894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 29 00:24:25.983196 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 00:24:25.983257 systemd[1]: Stopped kmod-static-nodes.service. Oct 29 00:24:25.984182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 00:24:25.984233 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 29 00:24:25.989104 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 29 00:24:25.989581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 00:24:25.989672 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 29 00:24:26.045344 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Oct 29 00:24:26.045463 systemd[1]: Stopped sysroot-boot.service. Oct 29 00:24:26.050564 kernel: audit: type=1131 audit(1761697466.046:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:26.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:26.047066 systemd[1]: Reached target initrd-switch-root.target. Oct 29 00:24:26.051320 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 00:24:26.056452 kernel: audit: type=1131 audit(1761697466.052:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:26.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:26.051412 systemd[1]: Stopped initrd-setup-root.service. Oct 29 00:24:26.054081 systemd[1]: Starting initrd-switch-root.service... Oct 29 00:24:26.061179 systemd[1]: Switching root. Oct 29 00:24:26.062000 audit: BPF prog-id=5 op=UNLOAD Oct 29 00:24:26.062000 audit: BPF prog-id=4 op=UNLOAD Oct 29 00:24:26.062000 audit: BPF prog-id=3 op=UNLOAD Oct 29 00:24:26.063000 audit: BPF prog-id=8 op=UNLOAD Oct 29 00:24:26.063000 audit: BPF prog-id=7 op=UNLOAD Oct 29 00:24:26.080280 iscsid[748]: iscsid shutting down. Oct 29 00:24:26.081011 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Oct 29 00:24:26.081068 systemd-journald[290]: Journal stopped Oct 29 00:24:28.269806 kernel: SELinux: Class mctp_socket not defined in policy. Oct 29 00:24:28.269864 kernel: SELinux: Class anon_inode not defined in policy. Oct 29 00:24:28.269875 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 29 00:24:28.269887 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 00:24:28.269897 kernel: SELinux: policy capability open_perms=1 Oct 29 00:24:28.269906 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 00:24:28.269915 kernel: SELinux: policy capability always_check_network=0 Oct 29 00:24:28.269925 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 00:24:28.269938 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 00:24:28.269954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 00:24:28.269964 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 00:24:28.269983 systemd[1]: Successfully loaded SELinux policy in 40.039ms. Oct 29 00:24:28.270007 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.555ms. Oct 29 00:24:28.270020 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 00:24:28.270032 systemd[1]: Detected virtualization kvm. Oct 29 00:24:28.270044 systemd[1]: Detected architecture arm64. Oct 29 00:24:28.270054 systemd[1]: Detected first boot. Oct 29 00:24:28.270065 systemd[1]: Initializing machine ID from VM UUID. 
Oct 29 00:24:28.270076 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Oct 29 00:24:28.270089 systemd[1]: Populated /etc with preset unit settings. Oct 29 00:24:28.270100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:24:28.270112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:24:28.270123 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:24:28.270135 systemd[1]: Queued start job for default target multi-user.target. Oct 29 00:24:28.270145 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 29 00:24:28.270157 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 29 00:24:28.270167 systemd[1]: Created slice system-addon\x2drun.slice. Oct 29 00:24:28.270178 systemd[1]: Created slice system-getty.slice. Oct 29 00:24:28.270188 systemd[1]: Created slice system-modprobe.slice. Oct 29 00:24:28.270199 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 29 00:24:28.270209 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 29 00:24:28.270220 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 29 00:24:28.270230 systemd[1]: Created slice user.slice. Oct 29 00:24:28.270241 systemd[1]: Started systemd-ask-password-console.path. Oct 29 00:24:28.270256 systemd[1]: Started systemd-ask-password-wall.path. Oct 29 00:24:28.270267 systemd[1]: Set up automount boot.automount. Oct 29 00:24:28.270277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 29 00:24:28.270287 systemd[1]: Reached target integritysetup.target. Oct 29 00:24:28.270298 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 00:24:28.270308 systemd[1]: Reached target remote-fs.target. Oct 29 00:24:28.270319 systemd[1]: Reached target slices.target. Oct 29 00:24:28.270329 systemd[1]: Reached target swap.target. Oct 29 00:24:28.270340 systemd[1]: Reached target torcx.target. Oct 29 00:24:28.270351 systemd[1]: Reached target veritysetup.target. Oct 29 00:24:28.270361 systemd[1]: Listening on systemd-coredump.socket. Oct 29 00:24:28.270371 systemd[1]: Listening on systemd-initctl.socket. Oct 29 00:24:28.270381 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 00:24:28.270391 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 00:24:28.270423 systemd[1]: Listening on systemd-journald.socket. Oct 29 00:24:28.270434 systemd[1]: Listening on systemd-networkd.socket. Oct 29 00:24:28.270444 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 00:24:28.270455 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 00:24:28.270466 systemd[1]: Listening on systemd-userdbd.socket. Oct 29 00:24:28.270477 systemd[1]: Mounting dev-hugepages.mount... Oct 29 00:24:28.270490 systemd[1]: Mounting dev-mqueue.mount... Oct 29 00:24:28.270499 systemd[1]: Mounting media.mount... Oct 29 00:24:28.270509 systemd[1]: Mounting sys-kernel-debug.mount... Oct 29 00:24:28.270519 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 29 00:24:28.270530 systemd[1]: Mounting tmp.mount... Oct 29 00:24:28.270540 systemd[1]: Starting flatcar-tmpfiles.service... 
Oct 29 00:24:28.270551 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:24:28.270562 systemd[1]: Starting kmod-static-nodes.service... Oct 29 00:24:28.270572 systemd[1]: Starting modprobe@configfs.service... Oct 29 00:24:28.270582 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:24:28.270592 systemd[1]: Starting modprobe@drm.service... Oct 29 00:24:28.270602 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:24:28.270612 systemd[1]: Starting modprobe@fuse.service... Oct 29 00:24:28.270622 systemd[1]: Starting modprobe@loop.service... Oct 29 00:24:28.270633 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 00:24:28.270643 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 29 00:24:28.270655 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Oct 29 00:24:28.270665 systemd[1]: Starting systemd-journald.service... Oct 29 00:24:28.270675 kernel: loop: module loaded Oct 29 00:24:28.270687 systemd[1]: Starting systemd-modules-load.service... Oct 29 00:24:28.270701 kernel: fuse: init (API version 7.34) Oct 29 00:24:28.270711 systemd[1]: Starting systemd-network-generator.service... Oct 29 00:24:28.270721 systemd[1]: Starting systemd-remount-fs.service... Oct 29 00:24:28.270731 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 00:24:28.270741 systemd[1]: Mounted dev-hugepages.mount. Oct 29 00:24:28.270756 systemd-journald[1041]: Journal started Oct 29 00:24:28.270803 systemd-journald[1041]: Runtime Journal (/run/log/journal/27a5fa4892f04523b9d56a7cfff4b5e9) is 6.0M, max 48.7M, 42.6M free. Oct 29 00:24:28.268000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 29 00:24:28.268000 audit[1041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff7faba60 a2=4000 a3=1 items=0 ppid=1 pid=1041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:24:28.268000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 29 00:24:28.273018 systemd[1]: Started systemd-journald.service. Oct 29 00:24:28.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.273994 systemd[1]: Mounted dev-mqueue.mount. Oct 29 00:24:28.274898 systemd[1]: Mounted media.mount. Oct 29 00:24:28.275681 systemd[1]: Mounted sys-kernel-debug.mount. Oct 29 00:24:28.276596 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 29 00:24:28.277478 systemd[1]: Mounted tmp.mount. Oct 29 00:24:28.278589 systemd[1]: Finished kmod-static-nodes.service. Oct 29 00:24:28.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.279760 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 00:24:28.280013 systemd[1]: Finished modprobe@configfs.service. 
Oct 29 00:24:28.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.281241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:24:28.281653 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:24:28.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.283077 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:24:28.283295 systemd[1]: Finished modprobe@drm.service. Oct 29 00:24:28.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.284508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:24:28.285774 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:24:28.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.287040 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 00:24:28.287273 systemd[1]: Finished modprobe@fuse.service. Oct 29 00:24:28.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.288527 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:24:28.288829 systemd[1]: Finished modprobe@loop.service. Oct 29 00:24:28.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:28.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.290424 systemd[1]: Finished flatcar-tmpfiles.service. Oct 29 00:24:28.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.291778 systemd[1]: Finished systemd-modules-load.service. Oct 29 00:24:28.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.293444 systemd[1]: Finished systemd-network-generator.service. Oct 29 00:24:28.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.295225 systemd[1]: Finished systemd-remount-fs.service. Oct 29 00:24:28.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.296912 systemd[1]: Reached target network-pre.target. Oct 29 00:24:28.299419 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 29 00:24:28.301541 systemd[1]: Mounting sys-kernel-config.mount... Oct 29 00:24:28.302360 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 00:24:28.304291 systemd[1]: Starting systemd-hwdb-update.service... Oct 29 00:24:28.306728 systemd[1]: Starting systemd-journal-flush.service... Oct 29 00:24:28.307772 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:24:28.309182 systemd[1]: Starting systemd-random-seed.service... Oct 29 00:24:28.310254 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 00:24:28.311777 systemd[1]: Starting systemd-sysctl.service... Oct 29 00:24:28.314335 systemd[1]: Starting systemd-sysusers.service... Oct 29 00:24:28.318687 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 00:24:28.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.320595 systemd-journald[1041]: Time spent on flushing to /var/log/journal/27a5fa4892f04523b9d56a7cfff4b5e9 is 13.474ms for 934 entries. Oct 29 00:24:28.320595 systemd-journald[1041]: System Journal (/var/log/journal/27a5fa4892f04523b9d56a7cfff4b5e9) is 8.0M, max 195.6M, 187.6M free. Oct 29 00:24:28.347035 systemd-journald[1041]: Received client request to flush runtime journal. Oct 29 00:24:28.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:28.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.319990 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 29 00:24:28.322188 systemd[1]: Mounted sys-kernel-config.mount. Oct 29 00:24:28.348084 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 29 00:24:28.325305 systemd[1]: Starting systemd-udev-settle.service... Oct 29 00:24:28.326770 systemd[1]: Finished systemd-random-seed.service. Oct 29 00:24:28.327877 systemd[1]: Reached target first-boot-complete.target. Oct 29 00:24:28.333347 systemd[1]: Finished systemd-sysctl.service. Oct 29 00:24:28.344003 systemd[1]: Finished systemd-sysusers.service. Oct 29 00:24:28.346441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 00:24:28.350387 systemd[1]: Finished systemd-journal-flush.service. Oct 29 00:24:28.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.363716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 00:24:28.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.821992 systemd[1]: Finished systemd-hwdb-update.service. Oct 29 00:24:28.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.824279 systemd[1]: Starting systemd-udevd.service... Oct 29 00:24:28.840786 systemd-udevd[1093]: Using default interface naming scheme 'v252'. Oct 29 00:24:28.855630 systemd[1]: Started systemd-udevd.service. Oct 29 00:24:28.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.858304 systemd[1]: Starting systemd-networkd.service... Oct 29 00:24:28.865152 systemd[1]: Starting systemd-userdbd.service... Oct 29 00:24:28.880267 systemd[1]: Found device dev-ttyAMA0.device. Oct 29 00:24:28.894198 systemd[1]: Started systemd-userdbd.service. Oct 29 00:24:28.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.938108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 29 00:24:28.944142 systemd-networkd[1101]: lo: Link UP Oct 29 00:24:28.944152 systemd-networkd[1101]: lo: Gained carrier Oct 29 00:24:28.944547 systemd-networkd[1101]: Enumeration completed Oct 29 00:24:28.944666 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 00:24:28.944697 systemd[1]: Started systemd-networkd.service. Oct 29 00:24:28.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.946054 systemd-networkd[1101]: eth0: Link UP Oct 29 00:24:28.946064 systemd-networkd[1101]: eth0: Gained carrier Oct 29 00:24:28.960635 systemd-networkd[1101]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:24:28.983919 systemd[1]: Finished systemd-udev-settle.service. Oct 29 00:24:28.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:28.986485 systemd[1]: Starting lvm2-activation-early.service... Oct 29 00:24:28.995435 lvm[1127]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 00:24:29.020416 systemd[1]: Finished lvm2-activation-early.service. Oct 29 00:24:29.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.021464 systemd[1]: Reached target cryptsetup.target. Oct 29 00:24:29.023643 systemd[1]: Starting lvm2-activation.service... Oct 29 00:24:29.027554 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 00:24:29.060465 systemd[1]: Finished lvm2-activation.service. Oct 29 00:24:29.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.061425 systemd[1]: Reached target local-fs-pre.target. Oct 29 00:24:29.062263 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 00:24:29.062298 systemd[1]: Reached target local-fs.target. Oct 29 00:24:29.063109 systemd[1]: Reached target machines.target. Oct 29 00:24:29.065299 systemd[1]: Starting ldconfig.service... Oct 29 00:24:29.066486 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.066551 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.068024 systemd[1]: Starting systemd-boot-update.service... Oct 29 00:24:29.070201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 29 00:24:29.072891 systemd[1]: Starting systemd-machine-id-commit.service... Oct 29 00:24:29.075329 systemd[1]: Starting systemd-sysext.service... Oct 29 00:24:29.076683 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1132 (bootctl) Oct 29 00:24:29.077943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Oct 29 00:24:29.085483 systemd[1]: Unmounting usr-share-oem.mount... Oct 29 00:24:29.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.089949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 29 00:24:29.091652 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 29 00:24:29.091940 systemd[1]: Unmounted usr-share-oem.mount. Oct 29 00:24:29.176439 kernel: loop0: detected capacity change from 0 to 207008 Oct 29 00:24:29.198434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 00:24:29.212579 systemd-fsck[1144]: fsck.fat 4.2 (2021-01-31) Oct 29 00:24:29.212579 systemd-fsck[1144]: /dev/vda1: 236 files, 117310/258078 clusters Oct 29 00:24:29.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.215165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 29 00:24:29.218125 systemd[1]: Mounting boot.mount... Oct 29 00:24:29.220472 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 00:24:29.222525 systemd[1]: Finished systemd-machine-id-commit.service. Oct 29 00:24:29.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.228155 systemd[1]: Mounted boot.mount. Oct 29 00:24:29.229417 kernel: loop1: detected capacity change from 0 to 207008 Oct 29 00:24:29.236807 systemd[1]: Finished systemd-boot-update.service. Oct 29 00:24:29.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.238087 (sd-sysext)[1152]: Using extensions 'kubernetes'. Oct 29 00:24:29.238926 (sd-sysext)[1152]: Merged extensions into '/usr'. Oct 29 00:24:29.258648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.260213 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:24:29.262812 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:24:29.265123 systemd[1]: Starting modprobe@loop.service... Oct 29 00:24:29.266086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.266226 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.267132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:24:29.267323 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:24:29.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:29.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.270320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:24:29.270543 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:24:29.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.272157 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:24:29.272339 systemd[1]: Finished modprobe@loop.service. Oct 29 00:24:29.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.275829 systemd[1]: Mounting usr-share-oem.mount... Oct 29 00:24:29.276740 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:24:29.276908 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.282128 systemd[1]: Mounted usr-share-oem.mount. Oct 29 00:24:29.284501 systemd[1]: Finished systemd-sysext.service. Oct 29 00:24:29.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.287123 systemd[1]: Starting ensure-sysext.service... Oct 29 00:24:29.289212 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 29 00:24:29.294546 systemd[1]: Reloading. Oct 29 00:24:29.300924 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 29 00:24:29.302120 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 00:24:29.303987 systemd-tmpfiles[1167]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 00:24:29.332108 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-10-29T00:24:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:24:29.332140 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-10-29T00:24:29Z" level=info msg="torcx already run" Oct 29 00:24:29.368163 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Oct 29 00:24:29.410060 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:24:29.410081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:24:29.427991 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:24:29.471166 systemd[1]: Finished ldconfig.service. Oct 29 00:24:29.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.473272 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 29 00:24:29.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.476721 systemd[1]: Starting audit-rules.service... Oct 29 00:24:29.478931 systemd[1]: Starting clean-ca-certificates.service... Oct 29 00:24:29.481011 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 29 00:24:29.483538 systemd[1]: Starting systemd-resolved.service... Oct 29 00:24:29.485816 systemd[1]: Starting systemd-timesyncd.service... Oct 29 00:24:29.489075 systemd[1]: Starting systemd-update-utmp.service... Oct 29 00:24:29.490638 systemd[1]: Finished clean-ca-certificates.service. Oct 29 00:24:29.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.493796 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:24:29.494000 audit[1245]: SYSTEM_BOOT pid=1245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.497200 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.498935 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:24:29.501143 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:24:29.503205 systemd[1]: Starting modprobe@loop.service... Oct 29 00:24:29.504187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.504367 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.504567 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:24:29.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:24:29.505927 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 29 00:24:29.507507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:24:29.507672 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:24:29.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.509172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:24:29.509328 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:24:29.510792 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:24:29.510995 systemd[1]: Finished modprobe@loop.service. Oct 29 00:24:29.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.514680 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.516100 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:24:29.518389 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:24:29.520533 systemd[1]: Starting modprobe@loop.service... Oct 29 00:24:29.521323 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.521560 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.523063 systemd[1]: Starting systemd-update-done.service... Oct 29 00:24:29.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.527382 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:24:29.528842 systemd[1]: Finished systemd-update-utmp.service. Oct 29 00:24:29.530262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:24:29.530518 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 29 00:24:29.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.531800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:24:29.531951 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:24:29.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.533392 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:24:29.533575 systemd[1]: Finished modprobe@loop.service. Oct 29 00:24:29.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.534820 systemd[1]: Finished systemd-update-done.service. Oct 29 00:24:29.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:24:29.539084 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.540756 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:24:29.543103 systemd[1]: Starting modprobe@drm.service... Oct 29 00:24:29.545097 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:24:29.547224 systemd[1]: Starting modprobe@loop.service... Oct 29 00:24:29.548111 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.548245 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.549604 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 29 00:24:29.551005 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 29 00:24:29.551000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 29 00:24:29.551000 audit[1275]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd5eb5750 a2=420 a3=0 items=0 ppid=1233 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:24:29.551000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 29 00:24:29.552327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:24:29.553694 augenrules[1275]: No rules Oct 29 00:24:29.552519 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:24:29.554007 systemd[1]: Finished audit-rules.service. Oct 29 00:24:29.555144 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:24:29.555318 systemd[1]: Finished modprobe@drm.service. Oct 29 00:24:29.556831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:24:29.556982 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:24:29.558482 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:24:29.558657 systemd[1]: Finished modprobe@loop.service. Oct 29 00:24:29.560040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:24:29.560146 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.563731 systemd[1]: Finished ensure-sysext.service. Oct 29 00:24:29.577744 systemd[1]: Started systemd-timesyncd.service. Oct 29 00:24:29.578579 systemd-timesyncd[1244]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 00:24:29.578633 systemd-timesyncd[1244]: Initial clock synchronization to Wed 2025-10-29 00:24:29.286042 UTC. Oct 29 00:24:29.579130 systemd[1]: Reached target time-set.target. Oct 29 00:24:29.579904 systemd-resolved[1241]: Positive Trust Anchors: Oct 29 00:24:29.580163 systemd-resolved[1241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 00:24:29.580241 systemd-resolved[1241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 00:24:29.588756 systemd-resolved[1241]: Defaulting to hostname 'linux'. Oct 29 00:24:29.590434 systemd[1]: Started systemd-resolved.service. Oct 29 00:24:29.591294 systemd[1]: Reached target network.target. Oct 29 00:24:29.592125 systemd[1]: Reached target nss-lookup.target. Oct 29 00:24:29.593020 systemd[1]: Reached target sysinit.target. Oct 29 00:24:29.593883 systemd[1]: Started motdgen.path. Oct 29 00:24:29.594608 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 29 00:24:29.595828 systemd[1]: Started logrotate.timer. Oct 29 00:24:29.596653 systemd[1]: Started mdadm.timer. Oct 29 00:24:29.597304 systemd[1]: Started systemd-tmpfiles-clean.timer. 
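A side note on the audit records above: the PROCTITLE field is the hex-encoded, NUL-separated command line of the process that changed the rule configuration. Decoded, proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 reads

    /sbin/auditctl -R /etc/audit/audit.rules

which matches audit-rules.service reloading /etc/audit/audit.rules and the augenrules "No rules" message above; the rules file appears to be empty on this image.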
Oct 29 00:24:29.598171 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 00:24:29.598212 systemd[1]: Reached target paths.target. Oct 29 00:24:29.598960 systemd[1]: Reached target timers.target. Oct 29 00:24:29.600087 systemd[1]: Listening on dbus.socket. Oct 29 00:24:29.602095 systemd[1]: Starting docker.socket... Oct 29 00:24:29.604287 systemd[1]: Listening on sshd.socket. Oct 29 00:24:29.605224 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.605618 systemd[1]: Listening on docker.socket. Oct 29 00:24:29.606379 systemd[1]: Reached target sockets.target. Oct 29 00:24:29.607131 systemd[1]: Reached target basic.target. Oct 29 00:24:29.608075 systemd[1]: System is tainted: cgroupsv1 Oct 29 00:24:29.608129 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.608154 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 00:24:29.609329 systemd[1]: Starting containerd.service... Oct 29 00:24:29.611256 systemd[1]: Starting dbus.service... Oct 29 00:24:29.613108 systemd[1]: Starting enable-oem-cloudinit.service... Oct 29 00:24:29.615182 systemd[1]: Starting extend-filesystems.service... Oct 29 00:24:29.616238 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 29 00:24:29.617517 systemd[1]: Starting motdgen.service... Oct 29 00:24:29.618104 jq[1296]: false Oct 29 00:24:29.619338 systemd[1]: Starting prepare-helm.service... Oct 29 00:24:29.622209 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 29 00:24:29.624239 systemd[1]: Starting sshd-keygen.service... Oct 29 00:24:29.626774 systemd[1]: Starting systemd-logind.service... Oct 29 00:24:29.627646 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:24:29.627740 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 00:24:29.629032 systemd[1]: Starting update-engine.service... Oct 29 00:24:29.630960 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 29 00:24:29.633815 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 00:24:29.634084 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 29 00:24:29.634782 jq[1311]: true Oct 29 00:24:29.635249 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 00:24:29.635545 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 29 00:24:29.648517 jq[1319]: true Oct 29 00:24:29.648669 tar[1313]: linux-arm64/LICENSE Oct 29 00:24:29.648896 tar[1313]: linux-arm64/helm Oct 29 00:24:29.651082 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 00:24:29.651329 systemd[1]: Finished motdgen.service. 
Oct 29 00:24:29.659456 extend-filesystems[1297]: Found loop1 Oct 29 00:24:29.659456 extend-filesystems[1297]: Found vda Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda1 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda2 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda3 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found usr Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda4 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda6 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda7 Oct 29 00:24:29.661311 extend-filesystems[1297]: Found vda9 Oct 29 00:24:29.661311 extend-filesystems[1297]: Checking size of /dev/vda9 Oct 29 00:24:29.685544 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 29 00:24:29.672764 dbus-daemon[1295]: [system] SELinux support is enabled Oct 29 00:24:29.674508 systemd[1]: Started dbus.service. Oct 29 00:24:29.685873 extend-filesystems[1297]: Resized partition /dev/vda9 Oct 29 00:24:29.680857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 00:24:29.687126 extend-filesystems[1350]: resize2fs 1.46.5 (30-Dec-2021) Oct 29 00:24:29.680881 systemd[1]: Reached target system-config.target. Oct 29 00:24:29.682407 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 00:24:29.682425 systemd[1]: Reached target user-config.target. Oct 29 00:24:29.708556 update_engine[1310]: I1029 00:24:29.705790 1310 main.cc:92] Flatcar Update Engine starting Oct 29 00:24:29.709758 systemd-logind[1308]: Watching system buttons on /dev/input/event0 (Power Button) Oct 29 00:24:29.710606 systemd[1]: Started update-engine.service. Oct 29 00:24:29.711045 update_engine[1310]: I1029 00:24:29.710736 1310 update_check_scheduler.cc:74] Next update check in 4m4s Oct 29 00:24:29.711071 systemd-logind[1308]: New seat seat0. Oct 29 00:24:29.713661 systemd[1]: Started locksmithd.service. Oct 29 00:24:29.714941 systemd[1]: Started systemd-logind.service. Oct 29 00:24:29.726460 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 29 00:24:29.752413 extend-filesystems[1350]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 00:24:29.752413 extend-filesystems[1350]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 00:24:29.752413 extend-filesystems[1350]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 29 00:24:29.750863 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 00:24:29.757091 bash[1352]: Updated "/home/core/.ssh/authorized_keys" Oct 29 00:24:29.757190 extend-filesystems[1297]: Resized filesystem in /dev/vda9 Oct 29 00:24:29.751135 systemd[1]: Finished extend-filesystems.service. Oct 29 00:24:29.757185 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 29 00:24:29.762103 env[1321]: time="2025-10-29T00:24:29.762025720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 29 00:24:29.788056 env[1321]: time="2025-10-29T00:24:29.787997560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 29 00:24:29.788199 env[1321]: time="2025-10-29T00:24:29.788176280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 29 00:24:29.789753 env[1321]: time="2025-10-29T00:24:29.789708840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 29 00:24:29.789753 env[1321]: time="2025-10-29T00:24:29.789749560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790073 env[1321]: time="2025-10-29T00:24:29.790046440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790073 env[1321]: time="2025-10-29T00:24:29.790070560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790137 env[1321]: time="2025-10-29T00:24:29.790085720Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 29 00:24:29.790137 env[1321]: time="2025-10-29T00:24:29.790096000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790193 env[1321]: time="2025-10-29T00:24:29.790175400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790430 env[1321]: time="2025-10-29T00:24:29.790388880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790745 env[1321]: time="2025-10-29T00:24:29.790718560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 00:24:29.790790 env[1321]: time="2025-10-29T00:24:29.790744240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 29 00:24:29.790834 env[1321]: time="2025-10-29T00:24:29.790812040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 29 00:24:29.790834 env[1321]: time="2025-10-29T00:24:29.790828840Z" level=info msg="metadata content store policy set" policy=shared Oct 29 00:24:29.796307 locksmithd[1354]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 00:24:29.796846 env[1321]: time="2025-10-29T00:24:29.796806680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 29 00:24:29.796897 env[1321]: time="2025-10-29T00:24:29.796852920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 29 00:24:29.796897 env[1321]: time="2025-10-29T00:24:29.796866400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 29 00:24:29.796936 env[1321]: time="2025-10-29T00:24:29.796904720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Oct 29 00:24:29.796936 env[1321]: time="2025-10-29T00:24:29.796920520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797009 env[1321]: time="2025-10-29T00:24:29.796936000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797009 env[1321]: time="2025-10-29T00:24:29.796949480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797392 env[1321]: time="2025-10-29T00:24:29.797363960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797449 env[1321]: time="2025-10-29T00:24:29.797392840Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797449 env[1321]: time="2025-10-29T00:24:29.797428360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797449 env[1321]: time="2025-10-29T00:24:29.797441080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.797516 env[1321]: time="2025-10-29T00:24:29.797455040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 29 00:24:29.797621 env[1321]: time="2025-10-29T00:24:29.797597400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 29 00:24:29.797705 env[1321]: time="2025-10-29T00:24:29.797686480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 29 00:24:29.798092 env[1321]: time="2025-10-29T00:24:29.798070720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 29 00:24:29.798126 env[1321]: time="2025-10-29T00:24:29.798102640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798126 env[1321]: time="2025-10-29T00:24:29.798122360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 29 00:24:29.798252 env[1321]: time="2025-10-29T00:24:29.798235600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798284 env[1321]: time="2025-10-29T00:24:29.798253040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798284 env[1321]: time="2025-10-29T00:24:29.798268600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798358 env[1321]: time="2025-10-29T00:24:29.798281480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798392 env[1321]: time="2025-10-29T00:24:29.798359200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798392 env[1321]: time="2025-10-29T00:24:29.798372560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798392 env[1321]: time="2025-10-29T00:24:29.798384080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 29 00:24:29.798392 env[1321]: time="2025-10-29T00:24:29.798403800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798504 env[1321]: time="2025-10-29T00:24:29.798418880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 29 00:24:29.798579 env[1321]: time="2025-10-29T00:24:29.798557120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798612 env[1321]: time="2025-10-29T00:24:29.798580000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798612 env[1321]: time="2025-10-29T00:24:29.798594000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 29 00:24:29.798612 env[1321]: time="2025-10-29T00:24:29.798606440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 29 00:24:29.798668 env[1321]: time="2025-10-29T00:24:29.798621240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 29 00:24:29.798668 env[1321]: time="2025-10-29T00:24:29.798631680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 29 00:24:29.798668 env[1321]: time="2025-10-29T00:24:29.798654320Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 29 00:24:29.798765 env[1321]: time="2025-10-29T00:24:29.798689600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 29 00:24:29.798936 env[1321]: time="2025-10-29T00:24:29.798883200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 29 00:24:29.799570 env[1321]: time="2025-10-29T00:24:29.798942800Z" level=info msg="Connect containerd service" Oct 29 00:24:29.799570 env[1321]: time="2025-10-29T00:24:29.798984200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 29 00:24:29.799861 env[1321]: time="2025-10-29T00:24:29.799826680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:24:29.800117 env[1321]: time="2025-10-29T00:24:29.800061560Z" level=info msg="Start subscribing containerd event" Oct 29 00:24:29.800159 env[1321]: time="2025-10-29T00:24:29.800137240Z" level=info msg="Start recovering state" Oct 29 00:24:29.800238 env[1321]: time="2025-10-29T00:24:29.800216040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 00:24:29.800238 env[1321]: time="2025-10-29T00:24:29.800229880Z" level=info msg="Start event monitor" Oct 29 00:24:29.800287 env[1321]: time="2025-10-29T00:24:29.800256280Z" level=info msg="Start snapshots syncer" Oct 29 00:24:29.800287 env[1321]: time="2025-10-29T00:24:29.800266680Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 29 00:24:29.800375 env[1321]: time="2025-10-29T00:24:29.800268480Z" level=info msg="Start cni network conf syncer for default" Oct 29 00:24:29.800375 env[1321]: time="2025-10-29T00:24:29.800373720Z" level=info msg="Start streaming server" Oct 29 00:24:29.800471 systemd[1]: Started containerd.service. Oct 29 00:24:29.802113 env[1321]: time="2025-10-29T00:24:29.802063920Z" level=info msg="containerd successfully booted in 0.058846s" Oct 29 00:24:30.085911 tar[1313]: linux-arm64/README.md Oct 29 00:24:30.090694 systemd[1]: Finished prepare-helm.service. Oct 29 00:24:30.550512 systemd-networkd[1101]: eth0: Gained IPv6LL Oct 29 00:24:30.553212 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 29 00:24:30.556154 systemd[1]: Reached target network-online.target. Oct 29 00:24:30.562360 systemd[1]: Starting kubelet.service... Oct 29 00:24:30.767353 sshd_keygen[1327]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 00:24:30.788786 systemd[1]: Finished sshd-keygen.service. Oct 29 00:24:30.791432 systemd[1]: Starting issuegen.service... Oct 29 00:24:30.797558 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 00:24:30.797841 systemd[1]: Finished issuegen.service. Oct 29 00:24:30.800449 systemd[1]: Starting systemd-user-sessions.service... Oct 29 00:24:30.810182 systemd[1]: Finished systemd-user-sessions.service. Oct 29 00:24:30.813102 systemd[1]: Started getty@tty1.service. Oct 29 00:24:30.815856 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 29 00:24:30.816949 systemd[1]: Reached target getty.target. Oct 29 00:24:31.223139 systemd[1]: Started kubelet.service. Oct 29 00:24:31.225377 systemd[1]: Reached target multi-user.target. Oct 29 00:24:31.228277 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 29 00:24:31.235466 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 29 00:24:31.235745 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 29 00:24:31.237212 systemd[1]: Startup finished in 5.220s (kernel) + 5.097s (userspace) = 10.317s. Oct 29 00:24:31.628595 kubelet[1398]: E1029 00:24:31.628499 1398 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:24:31.630356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:24:31.630529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:24:34.529771 systemd[1]: Created slice system-sshd.slice. Oct 29 00:24:34.530993 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:38022.service. Oct 29 00:24:34.581448 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 38022 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:24:34.584541 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:34.595022 systemd[1]: Created slice user-500.slice. Oct 29 00:24:34.596103 systemd[1]: Starting user-runtime-dir@500.service... Oct 29 00:24:34.597909 systemd-logind[1308]: New session 1 of user core. Oct 29 00:24:34.606681 systemd[1]: Finished user-runtime-dir@500.service. Oct 29 00:24:34.608157 systemd[1]: Starting user@500.service... 
Oct 29 00:24:34.612956 (systemd)[1413]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:34.688128 systemd[1413]: Queued start job for default target default.target. Oct 29 00:24:34.688830 systemd[1413]: Reached target paths.target. Oct 29 00:24:34.688857 systemd[1413]: Reached target sockets.target. Oct 29 00:24:34.688867 systemd[1413]: Reached target timers.target. Oct 29 00:24:34.688878 systemd[1413]: Reached target basic.target. Oct 29 00:24:34.688934 systemd[1413]: Reached target default.target. Oct 29 00:24:34.688963 systemd[1413]: Startup finished in 68ms. Oct 29 00:24:34.689037 systemd[1]: Started user@500.service. Oct 29 00:24:34.690111 systemd[1]: Started session-1.scope. Oct 29 00:24:34.744724 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:38026.service. Oct 29 00:24:34.792209 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 38026 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:24:34.793860 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:34.799546 systemd-logind[1308]: New session 2 of user core. Oct 29 00:24:34.801692 systemd[1]: Started session-2.scope. Oct 29 00:24:34.858753 sshd[1422]: pam_unix(sshd:session): session closed for user core Oct 29 00:24:34.861539 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:38038.service. Oct 29 00:24:34.862255 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:38026.service: Deactivated successfully. Oct 29 00:24:34.863537 systemd-logind[1308]: Session 2 logged out. Waiting for processes to exit. Oct 29 00:24:34.863621 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 00:24:34.865417 systemd-logind[1308]: Removed session 2. Oct 29 00:24:34.903036 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 38038 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:24:34.904437 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:34.909152 systemd-logind[1308]: New session 3 of user core. Oct 29 00:24:34.910046 systemd[1]: Started session-3.scope. Oct 29 00:24:34.961587 sshd[1428]: pam_unix(sshd:session): session closed for user core Oct 29 00:24:34.964666 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:38054.service. Oct 29 00:24:34.968047 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:38038.service: Deactivated successfully. Oct 29 00:24:34.969552 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 00:24:34.969554 systemd-logind[1308]: Session 3 logged out. Waiting for processes to exit. Oct 29 00:24:34.970908 systemd-logind[1308]: Removed session 3. Oct 29 00:24:35.007307 sshd[1434]: Accepted publickey for core from 10.0.0.1 port 38054 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:24:35.008633 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:35.012166 systemd-logind[1308]: New session 4 of user core. Oct 29 00:24:35.013004 systemd[1]: Started session-4.scope. Oct 29 00:24:35.074287 sshd[1434]: pam_unix(sshd:session): session closed for user core Oct 29 00:24:35.075765 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:38066.service. Oct 29 00:24:35.080797 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:38054.service: Deactivated successfully. Oct 29 00:24:35.081708 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 00:24:35.082320 systemd-logind[1308]: Session 4 logged out. Waiting for processes to exit. Oct 29 00:24:35.083535 systemd-logind[1308]: Removed session 4. 
Oct 29 00:24:35.119543 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 38066 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:24:35.120827 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:24:35.127065 systemd[1]: Started session-5.scope. Oct 29 00:24:35.127162 systemd-logind[1308]: New session 5 of user core. Oct 29 00:24:35.189679 sudo[1447]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 00:24:35.189900 sudo[1447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 00:24:35.234392 systemd[1]: Starting docker.service... Oct 29 00:24:35.294528 env[1459]: time="2025-10-29T00:24:35.294458817Z" level=info msg="Starting up" Oct 29 00:24:35.295948 env[1459]: time="2025-10-29T00:24:35.295918367Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 00:24:35.295948 env[1459]: time="2025-10-29T00:24:35.295943095Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 00:24:35.296025 env[1459]: time="2025-10-29T00:24:35.295967430Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 00:24:35.296025 env[1459]: time="2025-10-29T00:24:35.295977949Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 00:24:35.298293 env[1459]: time="2025-10-29T00:24:35.298261279Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 00:24:35.298293 env[1459]: time="2025-10-29T00:24:35.298289264Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 00:24:35.298406 env[1459]: time="2025-10-29T00:24:35.298307986Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 00:24:35.298406 env[1459]: time="2025-10-29T00:24:35.298317053Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 00:24:35.305951 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2308855885-merged.mount: Deactivated successfully. Oct 29 00:24:35.514722 env[1459]: time="2025-10-29T00:24:35.511481655Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 29 00:24:35.514722 env[1459]: time="2025-10-29T00:24:35.514455432Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 29 00:24:35.514928 env[1459]: time="2025-10-29T00:24:35.514753222Z" level=info msg="Loading containers: start." Oct 29 00:24:35.677415 kernel: Initializing XFRM netlink socket Oct 29 00:24:35.706851 env[1459]: time="2025-10-29T00:24:35.706794376Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Oct 29 00:24:35.773524 systemd-networkd[1101]: docker0: Link UP Oct 29 00:24:35.797916 env[1459]: time="2025-10-29T00:24:35.797859405Z" level=info msg="Loading containers: done." Oct 29 00:24:35.815982 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2459579336-merged.mount: Deactivated successfully. 
Oct 29 00:24:35.832220 env[1459]: time="2025-10-29T00:24:35.832174622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:24:35.832586 env[1459]: time="2025-10-29T00:24:35.832566729Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 29 00:24:35.832765 env[1459]: time="2025-10-29T00:24:35.832749477Z" level=info msg="Daemon has completed initialization" Oct 29 00:24:35.855988 systemd[1]: Started docker.service. Oct 29 00:24:35.865224 env[1459]: time="2025-10-29T00:24:35.865163817Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:24:36.561471 env[1321]: time="2025-10-29T00:24:36.561413450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 00:24:37.167526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601593228.mount: Deactivated successfully. Oct 29 00:24:38.493547 env[1321]: time="2025-10-29T00:24:38.493428238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:38.495167 env[1321]: time="2025-10-29T00:24:38.495127146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:38.496998 env[1321]: time="2025-10-29T00:24:38.496966153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:38.498865 env[1321]: time="2025-10-29T00:24:38.498826093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:38.499835 env[1321]: time="2025-10-29T00:24:38.499801998Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 29 00:24:38.501316 env[1321]: time="2025-10-29T00:24:38.501262500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 00:24:39.934369 env[1321]: time="2025-10-29T00:24:39.934299929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:39.936554 env[1321]: time="2025-10-29T00:24:39.936516457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:39.938795 env[1321]: time="2025-10-29T00:24:39.938765306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:39.940446 env[1321]: time="2025-10-29T00:24:39.940418256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 29 00:24:39.941385 env[1321]: time="2025-10-29T00:24:39.941351802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 29 00:24:39.942478 env[1321]: time="2025-10-29T00:24:39.942446833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 00:24:41.127797 env[1321]: time="2025-10-29T00:24:41.127746849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:41.134141 env[1321]: time="2025-10-29T00:24:41.134059994Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:41.137419 env[1321]: time="2025-10-29T00:24:41.137371730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:41.143253 env[1321]: time="2025-10-29T00:24:41.143219543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:41.144121 env[1321]: time="2025-10-29T00:24:41.144072901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 29 00:24:41.144801 env[1321]: time="2025-10-29T00:24:41.144777561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 29 00:24:41.881315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 00:24:41.881527 systemd[1]: Stopped kubelet.service. Oct 29 00:24:41.882963 systemd[1]: Starting kubelet.service... Oct 29 00:24:41.988497 systemd[1]: Started kubelet.service. Oct 29 00:24:42.046808 kubelet[1596]: E1029 00:24:42.046764 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:24:42.049243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:24:42.049413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:24:42.225953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724868221.mount: Deactivated successfully. 
Oct 29 00:24:42.832606 env[1321]: time="2025-10-29T00:24:42.832556229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:42.836098 env[1321]: time="2025-10-29T00:24:42.836040106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:42.837886 env[1321]: time="2025-10-29T00:24:42.837837057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:42.839232 env[1321]: time="2025-10-29T00:24:42.839193873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:42.839753 env[1321]: time="2025-10-29T00:24:42.839725212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 29 00:24:42.840501 env[1321]: time="2025-10-29T00:24:42.840369632Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 00:24:43.345907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879511333.mount: Deactivated successfully. Oct 29 00:24:44.451968 env[1321]: time="2025-10-29T00:24:44.451912871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.454768 env[1321]: time="2025-10-29T00:24:44.454734713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.458536 env[1321]: time="2025-10-29T00:24:44.458507723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.462118 env[1321]: time="2025-10-29T00:24:44.462069013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.463076 env[1321]: time="2025-10-29T00:24:44.463051165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 29 00:24:44.463543 env[1321]: time="2025-10-29T00:24:44.463521419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 00:24:44.898823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344661376.mount: Deactivated successfully. 
Oct 29 00:24:44.903019 env[1321]: time="2025-10-29T00:24:44.902963341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.905885 env[1321]: time="2025-10-29T00:24:44.905843493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.907980 env[1321]: time="2025-10-29T00:24:44.907949394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.909476 env[1321]: time="2025-10-29T00:24:44.909448097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:44.910027 env[1321]: time="2025-10-29T00:24:44.909977774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 29 00:24:44.910583 env[1321]: time="2025-10-29T00:24:44.910559715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 00:24:45.378845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851071986.mount: Deactivated successfully. Oct 29 00:24:47.434844 env[1321]: time="2025-10-29T00:24:47.434795578Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:47.436629 env[1321]: time="2025-10-29T00:24:47.436599739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:47.438579 env[1321]: time="2025-10-29T00:24:47.438556800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:47.440551 env[1321]: time="2025-10-29T00:24:47.440523743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:47.442412 env[1321]: time="2025-10-29T00:24:47.442373172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 29 00:24:52.056991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 00:24:52.057182 systemd[1]: Stopped kubelet.service. Oct 29 00:24:52.058856 systemd[1]: Starting kubelet.service... Oct 29 00:24:52.164802 systemd[1]: Started kubelet.service. 
Oct 29 00:24:52.204250 kubelet[1634]: E1029 00:24:52.204204 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:24:52.207231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:24:52.207384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:24:53.202560 systemd[1]: Stopped kubelet.service. Oct 29 00:24:53.205334 systemd[1]: Starting kubelet.service... Oct 29 00:24:53.234431 systemd[1]: Reloading. Oct 29 00:24:53.293594 /usr/lib/systemd/system-generators/torcx-generator[1672]: time="2025-10-29T00:24:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:24:53.293959 /usr/lib/systemd/system-generators/torcx-generator[1672]: time="2025-10-29T00:24:53Z" level=info msg="torcx already run" Oct 29 00:24:53.527101 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:24:53.527124 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:24:53.553871 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:24:53.626108 systemd[1]: Started kubelet.service. Oct 29 00:24:53.629097 systemd[1]: Stopping kubelet.service... Oct 29 00:24:53.629776 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:24:53.630046 systemd[1]: Stopped kubelet.service. Oct 29 00:24:53.632388 systemd[1]: Starting kubelet.service... Oct 29 00:24:53.737981 systemd[1]: Started kubelet.service. Oct 29 00:24:53.777389 kubelet[1729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:24:53.777389 kubelet[1729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:24:53.777389 kubelet[1729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
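All three kubelet start attempts so far (00:24:31, 00:24:42, 00:24:52) exit immediately because /var/lib/kubelet/config.yaml does not exist yet, and the deprecation warnings just above say that --container-runtime-endpoint and --volume-plugin-dir are expected to live in that same config file. Purely to illustrate the kind of file the kubelet is looking for (not the configuration this node actually ends up using, which is normally written by the cluster bootstrap tooling rather than by hand), a minimal hypothetical sketch could be:

    # hypothetical /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the cgroup driver the kubelet reports further down in this log
    cgroupDriver: cgroupfs
    # static pod path, CRI endpoint and volume plugin dir taken from values that appear later in the log
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

The clean start as kubelet[1729] below, after the torcx run and daemon reload, suggests the real config file was provisioned in the meantime.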
Oct 29 00:24:53.777389 kubelet[1729]: I1029 00:24:53.777352 1729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:24:54.429175 kubelet[1729]: I1029 00:24:54.429116 1729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 00:24:54.429175 kubelet[1729]: I1029 00:24:54.429156 1729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:24:54.429479 kubelet[1729]: I1029 00:24:54.429452 1729 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 00:24:54.458349 kubelet[1729]: E1029 00:24:54.458298 1729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:54.459444 kubelet[1729]: I1029 00:24:54.459360 1729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:24:54.466735 kubelet[1729]: E1029 00:24:54.466689 1729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 00:24:54.466735 kubelet[1729]: I1029 00:24:54.466725 1729 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 00:24:54.470273 kubelet[1729]: I1029 00:24:54.470242 1729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 00:24:54.470916 kubelet[1729]: I1029 00:24:54.470876 1729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:24:54.471161 kubelet[1729]: I1029 00:24:54.470986 1729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 00:24:54.471385 kubelet[1729]: I1029 00:24:54.471368 1729 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:24:54.471472 kubelet[1729]: I1029 00:24:54.471461 1729 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 00:24:54.471729 kubelet[1729]: I1029 00:24:54.471716 1729 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:24:54.474613 kubelet[1729]: I1029 00:24:54.474588 1729 kubelet.go:446] "Attempting to sync node with API server" Oct 29 00:24:54.474751 kubelet[1729]: I1029 00:24:54.474736 1729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:24:54.474892 kubelet[1729]: I1029 00:24:54.474877 1729 kubelet.go:352] "Adding apiserver pod source" Oct 29 00:24:54.475080 kubelet[1729]: I1029 00:24:54.475069 1729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:24:54.477333 kubelet[1729]: W1029 00:24:54.477283 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:54.477523 kubelet[1729]: E1029 00:24:54.477500 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:54.477639 kubelet[1729]: W1029 00:24:54.477594 1729 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:54.477714 kubelet[1729]: E1029 00:24:54.477693 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:54.480563 kubelet[1729]: I1029 00:24:54.480543 1729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 00:24:54.481331 kubelet[1729]: I1029 00:24:54.481308 1729 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 00:24:54.481591 kubelet[1729]: W1029 00:24:54.481576 1729 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 00:24:54.482607 kubelet[1729]: I1029 00:24:54.482583 1729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:24:54.482773 kubelet[1729]: I1029 00:24:54.482758 1729 server.go:1287] "Started kubelet" Oct 29 00:24:54.485207 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Oct 29 00:24:54.485434 kubelet[1729]: I1029 00:24:54.485357 1729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:24:54.488352 kubelet[1729]: I1029 00:24:54.488249 1729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:24:54.488710 kubelet[1729]: I1029 00:24:54.488686 1729 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:24:54.488837 kubelet[1729]: I1029 00:24:54.488775 1729 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:24:54.489722 kubelet[1729]: I1029 00:24:54.489698 1729 server.go:479] "Adding debug handlers to kubelet server" Oct 29 00:24:54.490684 kubelet[1729]: I1029 00:24:54.490662 1729 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:24:54.490794 kubelet[1729]: E1029 00:24:54.490767 1729 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:24:54.490846 kubelet[1729]: I1029 00:24:54.490824 1729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:24:54.490942 kubelet[1729]: E1029 00:24:54.490873 1729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:24:54.491792 kubelet[1729]: E1029 00:24:54.491747 1729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Oct 29 00:24:54.491893 kubelet[1729]: I1029 00:24:54.491849 1729 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:24:54.491924 kubelet[1729]: I1029 00:24:54.491902 1729 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:24:54.492354 kubelet[1729]: I1029 00:24:54.492310 1729 factory.go:221] Registration of the systemd container factory successfully Oct 29 00:24:54.492456 kubelet[1729]: I1029 00:24:54.492436 1729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:24:54.492532 kubelet[1729]: E1029 00:24:54.492108 1729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872ce8990569e06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:24:54.48272231 +0000 UTC m=+0.740544432,LastTimestamp:2025-10-29 00:24:54.48272231 +0000 UTC m=+0.740544432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 00:24:54.492782 kubelet[1729]: W1029 00:24:54.492737 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:54.492894 kubelet[1729]: E1029 00:24:54.492874 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:54.493959 kubelet[1729]: I1029 00:24:54.493934 1729 factory.go:221] Registration of the containerd container factory successfully Oct 29 00:24:54.502725 kubelet[1729]: I1029 00:24:54.502664 1729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 00:24:54.503708 kubelet[1729]: I1029 00:24:54.503673 1729 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 29 00:24:54.503708 kubelet[1729]: I1029 00:24:54.503698 1729 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 00:24:54.503812 kubelet[1729]: I1029 00:24:54.503718 1729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:24:54.503812 kubelet[1729]: I1029 00:24:54.503726 1729 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 00:24:54.503812 kubelet[1729]: E1029 00:24:54.503770 1729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:24:54.507996 kubelet[1729]: W1029 00:24:54.507929 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:54.508127 kubelet[1729]: E1029 00:24:54.508010 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:54.515948 kubelet[1729]: I1029 00:24:54.515896 1729 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:24:54.515948 kubelet[1729]: I1029 00:24:54.515934 1729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:24:54.515948 kubelet[1729]: I1029 00:24:54.515958 1729 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:24:54.591064 kubelet[1729]: E1029 00:24:54.591011 1729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:24:54.604913 kubelet[1729]: E1029 00:24:54.604246 1729 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 00:24:54.691595 kubelet[1729]: E1029 00:24:54.691456 1729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:24:54.692972 kubelet[1729]: E1029 00:24:54.692874 1729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Oct 29 00:24:54.760528 kubelet[1729]: I1029 00:24:54.760482 1729 policy_none.go:49] "None policy: Start" Oct 29 00:24:54.760528 kubelet[1729]: I1029 00:24:54.760530 1729 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:24:54.760528 kubelet[1729]: I1029 00:24:54.760543 1729 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:24:54.774845 kubelet[1729]: I1029 00:24:54.774520 1729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 00:24:54.774845 kubelet[1729]: I1029 00:24:54.774676 1729 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:24:54.774845 kubelet[1729]: I1029 00:24:54.774688 1729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:24:54.776796 kubelet[1729]: I1029 00:24:54.776773 1729 plugin_manager.go:118] "Starting Kubelet Plugin 
Manager" Oct 29 00:24:54.778182 kubelet[1729]: E1029 00:24:54.778156 1729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:24:54.778613 kubelet[1729]: E1029 00:24:54.778213 1729 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 00:24:54.809988 kubelet[1729]: E1029 00:24:54.809940 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:54.815788 kubelet[1729]: E1029 00:24:54.815755 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:54.816820 kubelet[1729]: E1029 00:24:54.816800 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:54.876735 kubelet[1729]: I1029 00:24:54.876708 1729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:24:54.877565 kubelet[1729]: E1029 00:24:54.877503 1729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 29 00:24:54.993299 kubelet[1729]: I1029 00:24:54.993162 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:54.993299 kubelet[1729]: I1029 00:24:54.993205 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:54.993299 kubelet[1729]: I1029 00:24:54.993227 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:54.993509 kubelet[1729]: I1029 00:24:54.993303 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:54.993509 kubelet[1729]: I1029 00:24:54.993339 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:54.993509 kubelet[1729]: I1029 00:24:54.993359 1729 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:54.993509 kubelet[1729]: I1029 00:24:54.993376 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:54.993509 kubelet[1729]: I1029 00:24:54.993393 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:54.993638 kubelet[1729]: I1029 00:24:54.993437 1729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:55.079388 kubelet[1729]: I1029 00:24:55.079343 1729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:24:55.079837 kubelet[1729]: E1029 00:24:55.079807 1729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 29 00:24:55.093569 kubelet[1729]: E1029 00:24:55.093515 1729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Oct 29 00:24:55.111039 kubelet[1729]: E1029 00:24:55.110982 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:55.111799 env[1321]: time="2025-10-29T00:24:55.111710774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:02c536a943ed5dd4413ef0ba2a284648,Namespace:kube-system,Attempt:0,}" Oct 29 00:24:55.117563 kubelet[1729]: E1029 00:24:55.116926 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:55.117694 env[1321]: time="2025-10-29T00:24:55.117423991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 29 00:24:55.117899 kubelet[1729]: E1029 00:24:55.117879 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:55.118567 env[1321]: time="2025-10-29T00:24:55.118517610Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 29 00:24:55.323750 kubelet[1729]: W1029 00:24:55.323682 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:55.323919 kubelet[1729]: E1029 00:24:55.323754 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:55.482318 kubelet[1729]: I1029 00:24:55.481956 1729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:24:55.482318 kubelet[1729]: E1029 00:24:55.482304 1729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 29 00:24:55.523485 kubelet[1729]: W1029 00:24:55.523436 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:55.523485 kubelet[1729]: E1029 00:24:55.523485 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:55.842858 kubelet[1729]: W1029 00:24:55.842790 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:55.843203 kubelet[1729]: E1029 00:24:55.842862 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:55.894967 kubelet[1729]: E1029 00:24:55.894931 1729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s" Oct 29 00:24:55.957735 kubelet[1729]: W1029 00:24:55.957657 1729 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Oct 29 00:24:55.957735 kubelet[1729]: E1029 00:24:55.957734 1729 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:56.009682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266910392.mount: Deactivated successfully. Oct 29 00:24:56.089538 env[1321]: time="2025-10-29T00:24:56.089483989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.092726 env[1321]: time="2025-10-29T00:24:56.092678517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.108482 env[1321]: time="2025-10-29T00:24:56.108353419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.149241 env[1321]: time="2025-10-29T00:24:56.149190677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.241165 env[1321]: time="2025-10-29T00:24:56.241105714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.268348 env[1321]: time="2025-10-29T00:24:56.268298363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.269986 env[1321]: time="2025-10-29T00:24:56.269898144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.272096 env[1321]: time="2025-10-29T00:24:56.272046342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.275504 env[1321]: time="2025-10-29T00:24:56.275465935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.277351 env[1321]: time="2025-10-29T00:24:56.277318309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.278940 env[1321]: time="2025-10-29T00:24:56.278905864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.283071 env[1321]: time="2025-10-29T00:24:56.281748273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:24:56.284774 kubelet[1729]: I1029 00:24:56.284468 1729 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Oct 29 00:24:56.286163 kubelet[1729]: E1029 00:24:56.286127 1729 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Oct 29 00:24:56.345169 env[1321]: time="2025-10-29T00:24:56.345055629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:24:56.345169 env[1321]: time="2025-10-29T00:24:56.345117998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:24:56.345169 env[1321]: time="2025-10-29T00:24:56.345129905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:24:56.345553 env[1321]: time="2025-10-29T00:24:56.345509753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/155c64150ad6a62344b4e351071de795c788cc64ca6def2b0d7e947cf3e538b2 pid=1770 runtime=io.containerd.runc.v2 Oct 29 00:24:56.364788 env[1321]: time="2025-10-29T00:24:56.362212886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:24:56.364788 env[1321]: time="2025-10-29T00:24:56.362300546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:24:56.364788 env[1321]: time="2025-10-29T00:24:56.362328275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:24:56.364788 env[1321]: time="2025-10-29T00:24:56.362509708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7bd21cf68e5ae02b888db73b45ef887aeb01219dd9bd1b4ea02205a626b383d pid=1796 runtime=io.containerd.runc.v2 Oct 29 00:24:56.370214 env[1321]: time="2025-10-29T00:24:56.370062483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:24:56.370214 env[1321]: time="2025-10-29T00:24:56.370172438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:24:56.370500 env[1321]: time="2025-10-29T00:24:56.370390430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:24:56.370905 env[1321]: time="2025-10-29T00:24:56.370816346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b32759ea060e164ff5cb8b7512bf3f4f5557547a15f0090acab53f4974d0ddf7 pid=1814 runtime=io.containerd.runc.v2 Oct 29 00:24:56.412270 env[1321]: time="2025-10-29T00:24:56.412227072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:02c536a943ed5dd4413ef0ba2a284648,Namespace:kube-system,Attempt:0,} returns sandbox id \"155c64150ad6a62344b4e351071de795c788cc64ca6def2b0d7e947cf3e538b2\"" Oct 29 00:24:56.413584 kubelet[1729]: E1029 00:24:56.413357 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:56.415344 env[1321]: time="2025-10-29T00:24:56.415297902Z" level=info msg="CreateContainer within sandbox \"155c64150ad6a62344b4e351071de795c788cc64ca6def2b0d7e947cf3e538b2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 00:24:56.432556 env[1321]: time="2025-10-29T00:24:56.432505501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"b32759ea060e164ff5cb8b7512bf3f4f5557547a15f0090acab53f4974d0ddf7\"" Oct 29 00:24:56.432710 env[1321]: time="2025-10-29T00:24:56.432502385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7bd21cf68e5ae02b888db73b45ef887aeb01219dd9bd1b4ea02205a626b383d\"" Oct 29 00:24:56.433576 kubelet[1729]: E1029 00:24:56.433552 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:56.433812 kubelet[1729]: E1029 00:24:56.433788 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:56.435363 env[1321]: time="2025-10-29T00:24:56.435318903Z" level=info msg="CreateContainer within sandbox \"b32759ea060e164ff5cb8b7512bf3f4f5557547a15f0090acab53f4974d0ddf7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 00:24:56.435883 env[1321]: time="2025-10-29T00:24:56.435820453Z" level=info msg="CreateContainer within sandbox \"a7bd21cf68e5ae02b888db73b45ef887aeb01219dd9bd1b4ea02205a626b383d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 00:24:56.438194 env[1321]: time="2025-10-29T00:24:56.438144131Z" level=info msg="CreateContainer within sandbox \"155c64150ad6a62344b4e351071de795c788cc64ca6def2b0d7e947cf3e538b2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e248989a18c24d8efb301e1efb44a6fc4709b0f6efec84723d5b95dc956aee8\"" Oct 29 00:24:56.439278 env[1321]: time="2025-10-29T00:24:56.439056894Z" level=info msg="StartContainer for \"6e248989a18c24d8efb301e1efb44a6fc4709b0f6efec84723d5b95dc956aee8\"" Oct 29 00:24:56.458233 env[1321]: time="2025-10-29T00:24:56.458174961Z" level=info msg="CreateContainer within sandbox \"b32759ea060e164ff5cb8b7512bf3f4f5557547a15f0090acab53f4974d0ddf7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"61341b58545aa3d64d2d98bc68498910645416d4f7f04fb4f58713b9aa81031a\"" Oct 29 00:24:56.458932 env[1321]: time="2025-10-29T00:24:56.458899698Z" level=info msg="StartContainer for \"61341b58545aa3d64d2d98bc68498910645416d4f7f04fb4f58713b9aa81031a\"" Oct 29 00:24:56.459132 env[1321]: time="2025-10-29T00:24:56.458924310Z" level=info msg="CreateContainer within sandbox \"a7bd21cf68e5ae02b888db73b45ef887aeb01219dd9bd1b4ea02205a626b383d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"efe2b74229f896ccebb5dceae7625a316e20eb97dc3af966c4b1625df4106d0c\"" Oct 29 00:24:56.459512 env[1321]: time="2025-10-29T00:24:56.459480118Z" level=info msg="StartContainer for \"efe2b74229f896ccebb5dceae7625a316e20eb97dc3af966c4b1625df4106d0c\"" Oct 29 00:24:56.542422 env[1321]: time="2025-10-29T00:24:56.540353306Z" level=info msg="StartContainer for \"61341b58545aa3d64d2d98bc68498910645416d4f7f04fb4f58713b9aa81031a\" returns successfully" Oct 29 00:24:56.545405 env[1321]: time="2025-10-29T00:24:56.545336961Z" level=info msg="StartContainer for \"6e248989a18c24d8efb301e1efb44a6fc4709b0f6efec84723d5b95dc956aee8\" returns successfully" Oct 29 00:24:56.575728 env[1321]: time="2025-10-29T00:24:56.575673516Z" level=info msg="StartContainer for \"efe2b74229f896ccebb5dceae7625a316e20eb97dc3af966c4b1625df4106d0c\" returns successfully" Oct 29 00:24:56.577539 kubelet[1729]: E1029 00:24:56.577492 1729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Oct 29 00:24:57.524818 kubelet[1729]: E1029 00:24:57.524364 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:57.524818 kubelet[1729]: E1029 00:24:57.524669 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:57.528031 kubelet[1729]: E1029 00:24:57.527650 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:57.528031 kubelet[1729]: E1029 00:24:57.527887 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:57.531227 kubelet[1729]: E1029 00:24:57.531191 1729 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:24:57.531930 kubelet[1729]: E1029 00:24:57.531904 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:57.887959 kubelet[1729]: I1029 00:24:57.887921 1729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:24:58.109989 kubelet[1729]: E1029 00:24:58.109954 1729 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 00:24:58.271234 kubelet[1729]: I1029 
00:24:58.270898 1729 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:24:58.273281 kubelet[1729]: E1029 00:24:58.273179 1729 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 00:24:58.294614 kubelet[1729]: I1029 00:24:58.292193 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:58.305431 kubelet[1729]: E1029 00:24:58.305381 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:58.305610 kubelet[1729]: I1029 00:24:58.305597 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:58.308565 kubelet[1729]: E1029 00:24:58.308524 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:58.308565 kubelet[1729]: I1029 00:24:58.308558 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:58.312990 kubelet[1729]: E1029 00:24:58.312952 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:58.477235 kubelet[1729]: I1029 00:24:58.477193 1729 apiserver.go:52] "Watching apiserver" Oct 29 00:24:58.492712 kubelet[1729]: I1029 00:24:58.492664 1729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:24:58.532143 kubelet[1729]: I1029 00:24:58.532041 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:58.532611 kubelet[1729]: I1029 00:24:58.532452 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:58.533053 kubelet[1729]: I1029 00:24:58.532745 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:58.536053 kubelet[1729]: E1029 00:24:58.536020 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:58.536448 kubelet[1729]: E1029 00:24:58.536430 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:58.538417 kubelet[1729]: E1029 00:24:58.538376 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:58.538576 kubelet[1729]: E1029 00:24:58.538553 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:58.540316 kubelet[1729]: E1029 00:24:58.540287 1729 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:24:58.540467 kubelet[1729]: E1029 00:24:58.540451 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:59.533424 kubelet[1729]: I1029 00:24:59.533342 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:24:59.533776 kubelet[1729]: I1029 00:24:59.533722 1729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:24:59.540747 kubelet[1729]: E1029 00:24:59.540700 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:24:59.541338 kubelet[1729]: E1029 00:24:59.541308 1729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:00.192721 systemd[1]: Reloading. Oct 29 00:25:00.252292 /usr/lib/systemd/system-generators/torcx-generator[2026]: time="2025-10-29T00:25:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:25:00.256371 /usr/lib/systemd/system-generators/torcx-generator[2026]: time="2025-10-29T00:25:00Z" level=info msg="torcx already run" Oct 29 00:25:00.331429 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:25:00.331453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:25:00.351851 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:25:00.424076 systemd[1]: Stopping kubelet.service... Oct 29 00:25:00.446852 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:25:00.447211 systemd[1]: Stopped kubelet.service. Oct 29 00:25:00.449089 systemd[1]: Starting kubelet.service... Oct 29 00:25:00.571073 systemd[1]: Started kubelet.service. Oct 29 00:25:00.624393 kubelet[2077]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 00:25:00.624804 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:25:00.624884 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:25:00.625082 kubelet[2077]: I1029 00:25:00.625027 2077 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:25:00.637308 kubelet[2077]: I1029 00:25:00.637261 2077 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 00:25:00.637496 kubelet[2077]: I1029 00:25:00.637482 2077 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:25:00.637917 kubelet[2077]: I1029 00:25:00.637891 2077 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 00:25:00.640271 kubelet[2077]: I1029 00:25:00.640237 2077 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 00:25:00.643248 kubelet[2077]: I1029 00:25:00.643195 2077 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:25:00.647230 kubelet[2077]: E1029 00:25:00.647194 2077 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 00:25:00.647438 kubelet[2077]: I1029 00:25:00.647387 2077 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 00:25:00.650951 kubelet[2077]: I1029 00:25:00.650915 2077 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 29 00:25:00.651702 kubelet[2077]: I1029 00:25:00.651659 2077 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:25:00.652179 kubelet[2077]: I1029 00:25:00.651786 2077 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 00:25:00.652312 kubelet[2077]: I1029 00:25:00.652195 2077 topology_manager.go:138] "Creating 
topology manager with none policy" Oct 29 00:25:00.652312 kubelet[2077]: I1029 00:25:00.652209 2077 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 00:25:00.652312 kubelet[2077]: I1029 00:25:00.652266 2077 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:25:00.652461 kubelet[2077]: I1029 00:25:00.652443 2077 kubelet.go:446] "Attempting to sync node with API server" Oct 29 00:25:00.652540 kubelet[2077]: I1029 00:25:00.652529 2077 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:25:00.652618 kubelet[2077]: I1029 00:25:00.652607 2077 kubelet.go:352] "Adding apiserver pod source" Oct 29 00:25:00.652686 kubelet[2077]: I1029 00:25:00.652675 2077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:25:00.654366 kubelet[2077]: I1029 00:25:00.654324 2077 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 00:25:00.655751 kubelet[2077]: I1029 00:25:00.655716 2077 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 00:25:00.656469 kubelet[2077]: I1029 00:25:00.656444 2077 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 00:25:00.656523 kubelet[2077]: I1029 00:25:00.656497 2077 server.go:1287] "Started kubelet" Oct 29 00:25:00.657314 kubelet[2077]: I1029 00:25:00.657257 2077 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:25:00.657593 kubelet[2077]: I1029 00:25:00.657570 2077 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:25:00.657663 kubelet[2077]: I1029 00:25:00.657639 2077 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:25:00.658852 kubelet[2077]: I1029 00:25:00.658813 2077 server.go:479] "Adding debug handlers to kubelet server" Oct 29 00:25:00.659283 kubelet[2077]: I1029 00:25:00.659249 2077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:25:00.664609 kubelet[2077]: I1029 00:25:00.662072 2077 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 00:25:00.664609 kubelet[2077]: I1029 00:25:00.662638 2077 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 00:25:00.668769 kubelet[2077]: I1029 00:25:00.668743 2077 reconciler.go:26] "Reconciler: start to sync state" Oct 29 00:25:00.669710 kubelet[2077]: I1029 00:25:00.669684 2077 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:25:00.671173 kubelet[2077]: E1029 00:25:00.671137 2077 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:25:00.680857 kubelet[2077]: E1029 00:25:00.680804 2077 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:25:00.682764 kubelet[2077]: I1029 00:25:00.682741 2077 factory.go:221] Registration of the systemd container factory successfully Oct 29 00:25:00.682930 kubelet[2077]: I1029 00:25:00.682908 2077 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:25:00.683948 kubelet[2077]: I1029 00:25:00.683922 2077 factory.go:221] Registration of the containerd container factory successfully Oct 29 00:25:00.690759 kubelet[2077]: I1029 00:25:00.690718 2077 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 00:25:00.691850 kubelet[2077]: I1029 00:25:00.691828 2077 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 00:25:00.691972 kubelet[2077]: I1029 00:25:00.691960 2077 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 00:25:00.692062 kubelet[2077]: I1029 00:25:00.692050 2077 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:25:00.692125 kubelet[2077]: I1029 00:25:00.692115 2077 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 00:25:00.692229 kubelet[2077]: E1029 00:25:00.692212 2077 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:25:00.732073 kubelet[2077]: I1029 00:25:00.730992 2077 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:25:00.732073 kubelet[2077]: I1029 00:25:00.731019 2077 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:25:00.732073 kubelet[2077]: I1029 00:25:00.731044 2077 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:25:00.732073 kubelet[2077]: I1029 00:25:00.731262 2077 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 00:25:00.732262 kubelet[2077]: I1029 00:25:00.731838 2077 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 00:25:00.732262 kubelet[2077]: I1029 00:25:00.732139 2077 policy_none.go:49] "None policy: Start" Oct 29 00:25:00.732262 kubelet[2077]: I1029 00:25:00.732154 2077 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 00:25:00.732262 kubelet[2077]: I1029 00:25:00.732171 2077 state_mem.go:35] "Initializing new in-memory state store" Oct 29 00:25:00.732664 kubelet[2077]: I1029 00:25:00.732627 2077 state_mem.go:75] "Updated machine memory state" Oct 29 00:25:00.734008 kubelet[2077]: I1029 00:25:00.733985 2077 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 00:25:00.734189 kubelet[2077]: I1029 00:25:00.734166 2077 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:25:00.734238 kubelet[2077]: I1029 00:25:00.734184 2077 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:25:00.734785 kubelet[2077]: I1029 00:25:00.734762 2077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:25:00.735359 kubelet[2077]: E1029 00:25:00.735246 2077 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 00:25:00.793361 kubelet[2077]: I1029 00:25:00.793314 2077 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:25:00.793555 kubelet[2077]: I1029 00:25:00.793526 2077 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:25:00.793639 kubelet[2077]: I1029 00:25:00.793466 2077 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:00.805198 kubelet[2077]: E1029 00:25:00.805162 2077 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 00:25:00.805483 kubelet[2077]: E1029 00:25:00.805462 2077 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 00:25:00.837725 kubelet[2077]: I1029 00:25:00.837695 2077 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:25:00.847322 kubelet[2077]: I1029 00:25:00.847274 2077 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:25:00.847461 kubelet[2077]: I1029 00:25:00.847380 2077 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:25:00.870048 kubelet[2077]: I1029 00:25:00.870006 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:00.870227 kubelet[2077]: I1029 00:25:00.870206 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:25:00.870340 kubelet[2077]: I1029 00:25:00.870323 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:25:00.870461 kubelet[2077]: I1029 00:25:00.870439 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:25:00.870562 kubelet[2077]: I1029 00:25:00.870550 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:00.870659 kubelet[2077]: I1029 00:25:00.870647 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:00.870760 kubelet[2077]: I1029 00:25:00.870747 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02c536a943ed5dd4413ef0ba2a284648-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"02c536a943ed5dd4413ef0ba2a284648\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:25:00.870860 kubelet[2077]: I1029 00:25:00.870845 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:00.870961 kubelet[2077]: I1029 00:25:00.870948 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:25:01.105039 kubelet[2077]: E1029 00:25:01.105000 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.105881 kubelet[2077]: E1029 00:25:01.105665 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.105881 kubelet[2077]: E1029 00:25:01.105803 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.210126 sudo[2111]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 29 00:25:01.210886 sudo[2111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 29 00:25:01.653819 kubelet[2077]: I1029 00:25:01.653766 2077 apiserver.go:52] "Watching apiserver" Oct 29 00:25:01.662907 kubelet[2077]: I1029 00:25:01.662858 2077 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 00:25:01.710142 kubelet[2077]: E1029 00:25:01.710091 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.711791 kubelet[2077]: I1029 00:25:01.711765 2077 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:25:01.715676 kubelet[2077]: E1029 00:25:01.715655 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.731226 sudo[2111]: pam_unix(sudo:session): session closed for user root Oct 29 00:25:01.806973 kubelet[2077]: E1029 00:25:01.806872 2077 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Oct 29 00:25:01.807128 kubelet[2077]: E1029 00:25:01.807086 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:01.830144 kubelet[2077]: I1029 00:25:01.829260 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.829213436 podStartE2EDuration="2.829213436s" podCreationTimestamp="2025-10-29 00:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:01.829198884 +0000 UTC m=+1.249470745" watchObservedRunningTime="2025-10-29 00:25:01.829213436 +0000 UTC m=+1.249485257" Oct 29 00:25:01.859099 kubelet[2077]: I1029 00:25:01.859011 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.858988451 podStartE2EDuration="1.858988451s" podCreationTimestamp="2025-10-29 00:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:01.850518095 +0000 UTC m=+1.270789916" watchObservedRunningTime="2025-10-29 00:25:01.858988451 +0000 UTC m=+1.279260272" Oct 29 00:25:02.711394 kubelet[2077]: E1029 00:25:02.711360 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:02.711930 kubelet[2077]: E1029 00:25:02.711459 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:03.713238 kubelet[2077]: E1029 00:25:03.713209 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:04.522168 sudo[1447]: pam_unix(sudo:session): session closed for user root Oct 29 00:25:04.524739 sshd[1441]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:04.530517 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:38066.service: Deactivated successfully. Oct 29 00:25:04.531696 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:25:04.532262 systemd-logind[1308]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:25:04.533067 systemd-logind[1308]: Removed session 5. Oct 29 00:25:06.793878 kubelet[2077]: I1029 00:25:06.793850 2077 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:25:06.794688 env[1321]: time="2025-10-29T00:25:06.794637627Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 00:25:06.795114 kubelet[2077]: I1029 00:25:06.795096 2077 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:25:07.750214 kubelet[2077]: I1029 00:25:07.749895 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.749855743 podStartE2EDuration="8.749855743s" podCreationTimestamp="2025-10-29 00:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:01.85998657 +0000 UTC m=+1.280258471" watchObservedRunningTime="2025-10-29 00:25:07.749855743 +0000 UTC m=+7.170127564" Oct 29 00:25:07.819097 kubelet[2077]: I1029 00:25:07.818953 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-etc-cni-netd\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819097 kubelet[2077]: I1029 00:25:07.819058 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-kernel\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819549 kubelet[2077]: I1029 00:25:07.819106 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-hostproc\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819549 kubelet[2077]: I1029 00:25:07.819134 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1738f709-b496-4870-834f-ea4a9dbbd778-clustermesh-secrets\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819549 kubelet[2077]: I1029 00:25:07.819157 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsbt\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-kube-api-access-jvsbt\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819549 kubelet[2077]: I1029 00:25:07.819175 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vbg\" (UniqueName: \"kubernetes.io/projected/181f4f04-c0d6-44e2-b245-8c16cd1315f5-kube-api-access-h5vbg\") pod \"cilium-operator-6c4d7847fc-g6b56\" (UID: \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\") " pod="kube-system/cilium-operator-6c4d7847fc-g6b56" Oct 29 00:25:07.819549 kubelet[2077]: I1029 00:25:07.819200 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cni-path\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819220 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-lib-modules\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819238 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-hubble-tls\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819259 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpql\" (UniqueName: \"kubernetes.io/projected/cb3dbb86-1e26-4395-aa42-2757d6fd2fde-kube-api-access-xkpql\") pod \"kube-proxy-8g4x4\" (UID: \"cb3dbb86-1e26-4395-aa42-2757d6fd2fde\") " pod="kube-system/kube-proxy-8g4x4" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819277 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-cgroup\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819298 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-config-path\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819670 kubelet[2077]: I1029 00:25:07.819318 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb3dbb86-1e26-4395-aa42-2757d6fd2fde-kube-proxy\") pod \"kube-proxy-8g4x4\" (UID: \"cb3dbb86-1e26-4395-aa42-2757d6fd2fde\") " pod="kube-system/kube-proxy-8g4x4" Oct 29 00:25:07.819803 kubelet[2077]: I1029 00:25:07.819339 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb3dbb86-1e26-4395-aa42-2757d6fd2fde-lib-modules\") pod \"kube-proxy-8g4x4\" (UID: \"cb3dbb86-1e26-4395-aa42-2757d6fd2fde\") " pod="kube-system/kube-proxy-8g4x4" Oct 29 00:25:07.819803 kubelet[2077]: I1029 00:25:07.819357 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-net\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819803 kubelet[2077]: I1029 00:25:07.819388 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-xtables-lock\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819803 kubelet[2077]: I1029 00:25:07.819422 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb3dbb86-1e26-4395-aa42-2757d6fd2fde-xtables-lock\") pod \"kube-proxy-8g4x4\" (UID: 
\"cb3dbb86-1e26-4395-aa42-2757d6fd2fde\") " pod="kube-system/kube-proxy-8g4x4" Oct 29 00:25:07.819803 kubelet[2077]: I1029 00:25:07.819447 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/181f4f04-c0d6-44e2-b245-8c16cd1315f5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g6b56\" (UID: \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\") " pod="kube-system/cilium-operator-6c4d7847fc-g6b56" Oct 29 00:25:07.819914 kubelet[2077]: I1029 00:25:07.819469 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-bpf-maps\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.819914 kubelet[2077]: I1029 00:25:07.819485 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-run\") pod \"cilium-hmnf8\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " pod="kube-system/cilium-hmnf8" Oct 29 00:25:07.921459 kubelet[2077]: I1029 00:25:07.921422 2077 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 29 00:25:08.054741 kubelet[2077]: E1029 00:25:08.054640 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.055557 kubelet[2077]: E1029 00:25:08.055534 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.056077 env[1321]: time="2025-10-29T00:25:08.056006021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmnf8,Uid:1738f709-b496-4870-834f-ea4a9dbbd778,Namespace:kube-system,Attempt:0,}" Oct 29 00:25:08.056785 env[1321]: time="2025-10-29T00:25:08.056619381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8g4x4,Uid:cb3dbb86-1e26-4395-aa42-2757d6fd2fde,Namespace:kube-system,Attempt:0,}" Oct 29 00:25:08.081435 kubelet[2077]: E1029 00:25:08.081377 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.082971 env[1321]: time="2025-10-29T00:25:08.082033780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g6b56,Uid:181f4f04-c0d6-44e2-b245-8c16cd1315f5,Namespace:kube-system,Attempt:0,}" Oct 29 00:25:08.111716 env[1321]: time="2025-10-29T00:25:08.111634406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:25:08.111857 env[1321]: time="2025-10-29T00:25:08.111721618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:25:08.111857 env[1321]: time="2025-10-29T00:25:08.111751701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:25:08.112089 env[1321]: time="2025-10-29T00:25:08.112046500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74 pid=2174 runtime=io.containerd.runc.v2 Oct 29 00:25:08.116147 env[1321]: time="2025-10-29T00:25:08.115639529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:25:08.116147 env[1321]: time="2025-10-29T00:25:08.115684695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:25:08.116147 env[1321]: time="2025-10-29T00:25:08.115702657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:25:08.116147 env[1321]: time="2025-10-29T00:25:08.115867679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0 pid=2195 runtime=io.containerd.runc.v2 Oct 29 00:25:08.116761 env[1321]: time="2025-10-29T00:25:08.116700028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:25:08.116876 env[1321]: time="2025-10-29T00:25:08.116745954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:25:08.116964 env[1321]: time="2025-10-29T00:25:08.116938699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:25:08.117207 env[1321]: time="2025-10-29T00:25:08.117162728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fff8ca897d987281323432f68a74732ffd2e37235b8bb992356dc2d71642f2e0 pid=2196 runtime=io.containerd.runc.v2 Oct 29 00:25:08.169645 env[1321]: time="2025-10-29T00:25:08.169544970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8g4x4,Uid:cb3dbb86-1e26-4395-aa42-2757d6fd2fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"fff8ca897d987281323432f68a74732ffd2e37235b8bb992356dc2d71642f2e0\"" Oct 29 00:25:08.170634 kubelet[2077]: E1029 00:25:08.170611 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.174490 env[1321]: time="2025-10-29T00:25:08.174440809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmnf8,Uid:1738f709-b496-4870-834f-ea4a9dbbd778,Namespace:kube-system,Attempt:0,} returns sandbox id \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\"" Oct 29 00:25:08.175054 kubelet[2077]: E1029 00:25:08.175024 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.175360 env[1321]: time="2025-10-29T00:25:08.175094495Z" level=info msg="CreateContainer within sandbox \"fff8ca897d987281323432f68a74732ffd2e37235b8bb992356dc2d71642f2e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:25:08.179348 env[1321]: time="2025-10-29T00:25:08.179304204Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 29 00:25:08.190102 env[1321]: time="2025-10-29T00:25:08.190051288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g6b56,Uid:181f4f04-c0d6-44e2-b245-8c16cd1315f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\"" Oct 29 00:25:08.191468 kubelet[2077]: E1029 00:25:08.191321 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.204954 env[1321]: time="2025-10-29T00:25:08.204903788Z" level=info msg="CreateContainer within sandbox \"fff8ca897d987281323432f68a74732ffd2e37235b8bb992356dc2d71642f2e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"928a0dfce350015058940c878cf25bd6e5ca30575b6d1c661d89732ebd8e1252\"" Oct 29 00:25:08.205514 env[1321]: time="2025-10-29T00:25:08.205485664Z" level=info msg="StartContainer for \"928a0dfce350015058940c878cf25bd6e5ca30575b6d1c661d89732ebd8e1252\"" Oct 29 00:25:08.262799 env[1321]: time="2025-10-29T00:25:08.262709578Z" level=info msg="StartContainer for \"928a0dfce350015058940c878cf25bd6e5ca30575b6d1c661d89732ebd8e1252\" returns successfully" Oct 29 00:25:08.725840 kubelet[2077]: E1029 00:25:08.725375 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:08.738986 kubelet[2077]: I1029 00:25:08.738844 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-8g4x4" podStartSLOduration=1.7388272420000002 podStartE2EDuration="1.738827242s" podCreationTimestamp="2025-10-29 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:08.738827122 +0000 UTC m=+8.159098943" watchObservedRunningTime="2025-10-29 00:25:08.738827242 +0000 UTC m=+8.159099144" Oct 29 00:25:09.443320 kubelet[2077]: E1029 00:25:09.443243 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:09.729765 kubelet[2077]: E1029 00:25:09.729501 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:11.737752 kubelet[2077]: E1029 00:25:11.737722 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:11.811006 kubelet[2077]: E1029 00:25:11.810245 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:12.671380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934659888.mount: Deactivated successfully. Oct 29 00:25:12.736273 kubelet[2077]: E1029 00:25:12.735946 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:12.736273 kubelet[2077]: E1029 00:25:12.736207 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:14.930518 update_engine[1310]: I1029 00:25:14.930445 1310 update_attempter.cc:509] Updating boot flags... 
Oct 29 00:25:15.066219 env[1321]: time="2025-10-29T00:25:15.065437092Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:15.068483 env[1321]: time="2025-10-29T00:25:15.068300112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:15.069496 env[1321]: time="2025-10-29T00:25:15.069462977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:15.070345 env[1321]: time="2025-10-29T00:25:15.070293492Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 29 00:25:15.076494 env[1321]: time="2025-10-29T00:25:15.076461570Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 29 00:25:15.079606 env[1321]: time="2025-10-29T00:25:15.079571372Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 00:25:15.092050 env[1321]: time="2025-10-29T00:25:15.091070373Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\"" Oct 29 00:25:15.094086 env[1321]: time="2025-10-29T00:25:15.094045322Z" level=info msg="StartContainer for \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\"" Oct 29 00:25:15.239133 env[1321]: time="2025-10-29T00:25:15.238843229Z" level=info msg="StartContainer for \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\" returns successfully" Oct 29 00:25:15.256222 env[1321]: time="2025-10-29T00:25:15.256173837Z" level=info msg="shim disconnected" id=33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2 Oct 29 00:25:15.256222 env[1321]: time="2025-10-29T00:25:15.256222642Z" level=warning msg="cleaning up after shim disconnected" id=33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2 namespace=k8s.io Oct 29 00:25:15.256693 env[1321]: time="2025-10-29T00:25:15.256234483Z" level=info msg="cleaning up dead shim" Oct 29 00:25:15.266298 env[1321]: time="2025-10-29T00:25:15.266255230Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:25:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2519 runtime=io.containerd.runc.v2\n" Oct 29 00:25:15.749034 kubelet[2077]: E1029 00:25:15.748975 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:15.751706 env[1321]: time="2025-10-29T00:25:15.751663248Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 29 00:25:15.774507 env[1321]: time="2025-10-29T00:25:15.774459551Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\"" Oct 29 00:25:15.776211 env[1321]: time="2025-10-29T00:25:15.775306188Z" level=info msg="StartContainer for \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\"" Oct 29 00:25:15.835170 env[1321]: time="2025-10-29T00:25:15.835121002Z" level=info msg="StartContainer for \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\" returns successfully" Oct 29 00:25:15.844551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:25:15.845143 systemd[1]: Stopped systemd-sysctl.service. Oct 29 00:25:15.845311 systemd[1]: Stopping systemd-sysctl.service... Oct 29 00:25:15.847156 systemd[1]: Starting systemd-sysctl.service... Oct 29 00:25:15.856155 systemd[1]: Finished systemd-sysctl.service. Oct 29 00:25:15.875781 env[1321]: time="2025-10-29T00:25:15.875729278Z" level=info msg="shim disconnected" id=dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2 Oct 29 00:25:15.875781 env[1321]: time="2025-10-29T00:25:15.875783163Z" level=warning msg="cleaning up after shim disconnected" id=dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2 namespace=k8s.io Oct 29 00:25:15.876007 env[1321]: time="2025-10-29T00:25:15.875793764Z" level=info msg="cleaning up dead shim" Oct 29 00:25:15.882989 env[1321]: time="2025-10-29T00:25:15.882950092Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:25:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2585 runtime=io.containerd.runc.v2\n" Oct 29 00:25:16.088837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2-rootfs.mount: Deactivated successfully. Oct 29 00:25:16.368291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034109667.mount: Deactivated successfully. Oct 29 00:25:16.754914 kubelet[2077]: E1029 00:25:16.754702 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:16.761583 env[1321]: time="2025-10-29T00:25:16.761540880Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 29 00:25:16.792382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196880532.mount: Deactivated successfully. 
Oct 29 00:25:16.834343 env[1321]: time="2025-10-29T00:25:16.834247942Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\"" Oct 29 00:25:16.840386 env[1321]: time="2025-10-29T00:25:16.837743203Z" level=info msg="StartContainer for \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\"" Oct 29 00:25:17.012495 env[1321]: time="2025-10-29T00:25:17.010817804Z" level=info msg="StartContainer for \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\" returns successfully" Oct 29 00:25:17.050416 env[1321]: time="2025-10-29T00:25:17.050334405Z" level=info msg="shim disconnected" id=cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b Oct 29 00:25:17.050416 env[1321]: time="2025-10-29T00:25:17.050386209Z" level=warning msg="cleaning up after shim disconnected" id=cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b namespace=k8s.io Oct 29 00:25:17.050416 env[1321]: time="2025-10-29T00:25:17.050404410Z" level=info msg="cleaning up dead shim" Oct 29 00:25:17.063724 env[1321]: time="2025-10-29T00:25:17.063664098Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:25:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2642 runtime=io.containerd.runc.v2\n" Oct 29 00:25:17.070835 env[1321]: time="2025-10-29T00:25:17.070782802Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:17.072280 env[1321]: time="2025-10-29T00:25:17.072247282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:17.074698 env[1321]: time="2025-10-29T00:25:17.074659240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:25:17.075186 env[1321]: time="2025-10-29T00:25:17.075155520Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 29 00:25:17.081647 env[1321]: time="2025-10-29T00:25:17.081591728Z" level=info msg="CreateContainer within sandbox \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 29 00:25:17.094577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547453203.mount: Deactivated successfully. 
Oct 29 00:25:17.097438 env[1321]: time="2025-10-29T00:25:17.097376463Z" level=info msg="CreateContainer within sandbox \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\"" Oct 29 00:25:17.098476 env[1321]: time="2025-10-29T00:25:17.098007355Z" level=info msg="StartContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\"" Oct 29 00:25:17.160604 env[1321]: time="2025-10-29T00:25:17.160515601Z" level=info msg="StartContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" returns successfully" Oct 29 00:25:17.760117 kubelet[2077]: E1029 00:25:17.760056 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:17.761729 kubelet[2077]: E1029 00:25:17.761695 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:17.762080 env[1321]: time="2025-10-29T00:25:17.762023455Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 29 00:25:17.779526 env[1321]: time="2025-10-29T00:25:17.779475926Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\"" Oct 29 00:25:17.780605 env[1321]: time="2025-10-29T00:25:17.780571256Z" level=info msg="StartContainer for \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\"" Oct 29 00:25:17.851770 env[1321]: time="2025-10-29T00:25:17.851716811Z" level=info msg="StartContainer for \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\" returns successfully" Oct 29 00:25:17.871686 env[1321]: time="2025-10-29T00:25:17.871641805Z" level=info msg="shim disconnected" id=e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077 Oct 29 00:25:17.871686 env[1321]: time="2025-10-29T00:25:17.871687169Z" level=warning msg="cleaning up after shim disconnected" id=e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077 namespace=k8s.io Oct 29 00:25:17.871686 env[1321]: time="2025-10-29T00:25:17.871697490Z" level=info msg="cleaning up dead shim" Oct 29 00:25:17.879309 env[1321]: time="2025-10-29T00:25:17.879256710Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:25:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2734 runtime=io.containerd.runc.v2\n" Oct 29 00:25:18.766763 kubelet[2077]: E1029 00:25:18.766729 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:18.768225 kubelet[2077]: E1029 00:25:18.767418 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:18.768827 env[1321]: time="2025-10-29T00:25:18.768791950Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" 
for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 29 00:25:18.799295 kubelet[2077]: I1029 00:25:18.799017 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g6b56" podStartSLOduration=2.915029599 podStartE2EDuration="11.799000151s" podCreationTimestamp="2025-10-29 00:25:07 +0000 UTC" firstStartedPulling="2025-10-29 00:25:08.192265857 +0000 UTC m=+7.612537678" lastFinishedPulling="2025-10-29 00:25:17.076236449 +0000 UTC m=+16.496508230" observedRunningTime="2025-10-29 00:25:17.825707718 +0000 UTC m=+17.245979539" watchObservedRunningTime="2025-10-29 00:25:18.799000151 +0000 UTC m=+18.219271932" Oct 29 00:25:18.810195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5758228.mount: Deactivated successfully. Oct 29 00:25:18.826820 env[1321]: time="2025-10-29T00:25:18.826763921Z" level=info msg="CreateContainer within sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\"" Oct 29 00:25:18.827507 env[1321]: time="2025-10-29T00:25:18.827473176Z" level=info msg="StartContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\"" Oct 29 00:25:18.888692 env[1321]: time="2025-10-29T00:25:18.888644877Z" level=info msg="StartContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" returns successfully" Oct 29 00:25:19.001824 kubelet[2077]: I1029 00:25:19.001789 2077 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 00:25:19.049447 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Oct 29 00:25:19.111326 kubelet[2077]: I1029 00:25:19.111284 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccb1d83e-e00e-4b75-b9bf-9924fe74943a-config-volume\") pod \"coredns-668d6bf9bc-ntrmn\" (UID: \"ccb1d83e-e00e-4b75-b9bf-9924fe74943a\") " pod="kube-system/coredns-668d6bf9bc-ntrmn" Oct 29 00:25:19.111326 kubelet[2077]: I1029 00:25:19.111328 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8gp\" (UniqueName: \"kubernetes.io/projected/ccb1d83e-e00e-4b75-b9bf-9924fe74943a-kube-api-access-2r8gp\") pod \"coredns-668d6bf9bc-ntrmn\" (UID: \"ccb1d83e-e00e-4b75-b9bf-9924fe74943a\") " pod="kube-system/coredns-668d6bf9bc-ntrmn" Oct 29 00:25:19.111537 kubelet[2077]: I1029 00:25:19.111359 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a04779a-8f35-4638-a972-d1b5fa33b1e1-config-volume\") pod \"coredns-668d6bf9bc-qpjl8\" (UID: \"6a04779a-8f35-4638-a972-d1b5fa33b1e1\") " pod="kube-system/coredns-668d6bf9bc-qpjl8" Oct 29 00:25:19.111537 kubelet[2077]: I1029 00:25:19.111382 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5s9t\" (UniqueName: \"kubernetes.io/projected/6a04779a-8f35-4638-a972-d1b5fa33b1e1-kube-api-access-l5s9t\") pod \"coredns-668d6bf9bc-qpjl8\" (UID: \"6a04779a-8f35-4638-a972-d1b5fa33b1e1\") " pod="kube-system/coredns-668d6bf9bc-qpjl8" Oct 29 00:25:19.309426 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Oct 29 00:25:19.340219 kubelet[2077]: E1029 00:25:19.340150 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:19.341223 env[1321]: time="2025-10-29T00:25:19.341165658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpjl8,Uid:6a04779a-8f35-4638-a972-d1b5fa33b1e1,Namespace:kube-system,Attempt:0,}" Oct 29 00:25:19.347123 kubelet[2077]: E1029 00:25:19.347084 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:19.347733 env[1321]: time="2025-10-29T00:25:19.347686384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ntrmn,Uid:ccb1d83e-e00e-4b75-b9bf-9924fe74943a,Namespace:kube-system,Attempt:0,}" Oct 29 00:25:19.772974 kubelet[2077]: E1029 00:25:19.772941 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:19.814421 kubelet[2077]: I1029 00:25:19.807280 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hmnf8" podStartSLOduration=5.909790628 podStartE2EDuration="12.807262883s" podCreationTimestamp="2025-10-29 00:25:07 +0000 UTC" firstStartedPulling="2025-10-29 00:25:08.178785137 +0000 UTC m=+7.599056958" lastFinishedPulling="2025-10-29 00:25:15.076257392 +0000 UTC m=+14.496529213" observedRunningTime="2025-10-29 00:25:19.806969301 +0000 UTC m=+19.227241202" watchObservedRunningTime="2025-10-29 00:25:19.807262883 +0000 UTC m=+19.227534664" Oct 29 00:25:20.778026 kubelet[2077]: E1029 00:25:20.777976 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:20.970234 systemd-networkd[1101]: cilium_host: Link UP Oct 29 00:25:20.970358 systemd-networkd[1101]: cilium_net: Link UP Oct 29 00:25:20.972502 systemd-networkd[1101]: cilium_net: Gained carrier Oct 29 00:25:20.974885 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Oct 29 00:25:20.974971 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Oct 29 00:25:20.975097 systemd-networkd[1101]: cilium_host: Gained carrier Oct 29 00:25:21.073969 systemd-networkd[1101]: cilium_vxlan: Link UP Oct 29 00:25:21.073976 systemd-networkd[1101]: cilium_vxlan: Gained carrier Oct 29 00:25:21.344440 kernel: NET: Registered PF_ALG protocol family Oct 29 00:25:21.622547 systemd-networkd[1101]: cilium_host: Gained IPv6LL Oct 29 00:25:21.753303 systemd-networkd[1101]: cilium_net: Gained IPv6LL Oct 29 00:25:21.780111 kubelet[2077]: E1029 00:25:21.780064 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:22.025067 systemd-networkd[1101]: lxc_health: Link UP Oct 29 00:25:22.034489 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Oct 29 00:25:22.034519 systemd-networkd[1101]: lxc_health: Gained carrier Oct 29 00:25:22.390558 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL Oct 29 00:25:22.420352 systemd-networkd[1101]: lxc1412d615ad50: Link UP Oct 29 00:25:22.429425 kernel: eth0: renamed from tmpcbda8 Oct 29 00:25:22.437580 
systemd-networkd[1101]: lxcccb13df7234c: Link UP Oct 29 00:25:22.447434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1412d615ad50: link becomes ready Oct 29 00:25:22.448604 systemd-networkd[1101]: lxc1412d615ad50: Gained carrier Oct 29 00:25:22.449465 kernel: eth0: renamed from tmpac9ca Oct 29 00:25:22.461432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcccb13df7234c: link becomes ready Oct 29 00:25:22.462305 systemd-networkd[1101]: lxcccb13df7234c: Gained carrier Oct 29 00:25:22.784176 kubelet[2077]: E1029 00:25:22.783755 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:23.158562 systemd-networkd[1101]: lxc_health: Gained IPv6LL Oct 29 00:25:24.054636 systemd-networkd[1101]: lxcccb13df7234c: Gained IPv6LL Oct 29 00:25:24.310577 systemd-networkd[1101]: lxc1412d615ad50: Gained IPv6LL Oct 29 00:25:26.413702 env[1321]: time="2025-10-29T00:25:26.413624507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:25:26.413702 env[1321]: time="2025-10-29T00:25:26.413668630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:25:26.413702 env[1321]: time="2025-10-29T00:25:26.413678910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:25:26.414111 env[1321]: time="2025-10-29T00:25:26.413811518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbda869a34837f2bf70da18623f1ee48cf4161fa3dd02255a667f9623de035c9 pid=3317 runtime=io.containerd.runc.v2 Oct 29 00:25:26.418656 env[1321]: time="2025-10-29T00:25:26.418573339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:25:26.418656 env[1321]: time="2025-10-29T00:25:26.418626062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:25:26.418656 env[1321]: time="2025-10-29T00:25:26.418637502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:25:26.419049 env[1321]: time="2025-10-29T00:25:26.419001122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac9caece5ecb180847a5e0164a9e20c1c51f2008edcaab486df2a88c144973cc pid=3324 runtime=io.containerd.runc.v2 Oct 29 00:25:26.453148 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:25:26.458303 systemd-resolved[1241]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:25:26.476545 env[1321]: time="2025-10-29T00:25:26.476493994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpjl8,Uid:6a04779a-8f35-4638-a972-d1b5fa33b1e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac9caece5ecb180847a5e0164a9e20c1c51f2008edcaab486df2a88c144973cc\"" Oct 29 00:25:26.478170 kubelet[2077]: E1029 00:25:26.478139 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:26.486007 env[1321]: time="2025-10-29T00:25:26.485962634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ntrmn,Uid:ccb1d83e-e00e-4b75-b9bf-9924fe74943a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbda869a34837f2bf70da18623f1ee48cf4161fa3dd02255a667f9623de035c9\"" Oct 29 00:25:26.486777 kubelet[2077]: E1029 00:25:26.486744 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:26.487114 env[1321]: time="2025-10-29T00:25:26.487081775Z" level=info msg="CreateContainer within sandbox \"ac9caece5ecb180847a5e0164a9e20c1c51f2008edcaab486df2a88c144973cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:25:26.489067 env[1321]: time="2025-10-29T00:25:26.489020441Z" level=info msg="CreateContainer within sandbox \"cbda869a34837f2bf70da18623f1ee48cf4161fa3dd02255a667f9623de035c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:25:26.505512 env[1321]: time="2025-10-29T00:25:26.505465103Z" level=info msg="CreateContainer within sandbox \"ac9caece5ecb180847a5e0164a9e20c1c51f2008edcaab486df2a88c144973cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3204fd6f414a0757146445fcc7c4eee4a14f87aae1f4b31d5a69bc393f90b83f\"" Oct 29 00:25:26.506199 env[1321]: time="2025-10-29T00:25:26.506171302Z" level=info msg="StartContainer for \"3204fd6f414a0757146445fcc7c4eee4a14f87aae1f4b31d5a69bc393f90b83f\"" Oct 29 00:25:26.521082 env[1321]: time="2025-10-29T00:25:26.521028636Z" level=info msg="CreateContainer within sandbox \"cbda869a34837f2bf70da18623f1ee48cf4161fa3dd02255a667f9623de035c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"092126b2256ee37297fa1efaa3eb49b224309f0be416cb36df7c85212309562e\"" Oct 29 00:25:26.521624 env[1321]: time="2025-10-29T00:25:26.521598708Z" level=info msg="StartContainer for \"092126b2256ee37297fa1efaa3eb49b224309f0be416cb36df7c85212309562e\"" Oct 29 00:25:26.572725 env[1321]: time="2025-10-29T00:25:26.572589183Z" level=info msg="StartContainer for \"3204fd6f414a0757146445fcc7c4eee4a14f87aae1f4b31d5a69bc393f90b83f\" returns successfully" Oct 29 00:25:26.577252 env[1321]: time="2025-10-29T00:25:26.577140153Z" level=info msg="StartContainer 
for \"092126b2256ee37297fa1efaa3eb49b224309f0be416cb36df7c85212309562e\" returns successfully" Oct 29 00:25:26.790580 kubelet[2077]: E1029 00:25:26.790245 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:26.793270 kubelet[2077]: E1029 00:25:26.793234 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:26.807335 kubelet[2077]: I1029 00:25:26.807265 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ntrmn" podStartSLOduration=19.80724937 podStartE2EDuration="19.80724937s" podCreationTimestamp="2025-10-29 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:26.806912712 +0000 UTC m=+26.227184493" watchObservedRunningTime="2025-10-29 00:25:26.80724937 +0000 UTC m=+26.227521191" Oct 29 00:25:26.855474 kubelet[2077]: I1029 00:25:26.855409 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qpjl8" podStartSLOduration=19.855377169 podStartE2EDuration="19.855377169s" podCreationTimestamp="2025-10-29 00:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:25:26.855000308 +0000 UTC m=+26.275272129" watchObservedRunningTime="2025-10-29 00:25:26.855377169 +0000 UTC m=+26.275648990" Oct 29 00:25:27.417837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239370175.mount: Deactivated successfully. Oct 29 00:25:27.794661 kubelet[2077]: E1029 00:25:27.794337 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:27.794661 kubelet[2077]: E1029 00:25:27.794425 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:28.797150 kubelet[2077]: E1029 00:25:28.796716 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:28.797150 kubelet[2077]: E1029 00:25:28.796716 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:31.008538 kubelet[2077]: I1029 00:25:31.008499 2077 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 00:25:31.009296 kubelet[2077]: E1029 00:25:31.009278 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:31.801408 kubelet[2077]: E1029 00:25:31.801368 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:25:36.161084 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:34452.service. 
Oct 29 00:25:36.201597 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 34452 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:36.204602 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:36.211440 systemd-logind[1308]: New session 6 of user core. Oct 29 00:25:36.211766 systemd[1]: Started session-6.scope. Oct 29 00:25:36.347622 sshd[3464]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:36.350809 systemd-logind[1308]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:25:36.351001 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:34452.service: Deactivated successfully. Oct 29 00:25:36.351969 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:25:36.352424 systemd-logind[1308]: Removed session 6. Oct 29 00:25:41.351962 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:57444.service. Oct 29 00:25:41.405911 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:41.409706 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:41.415345 systemd-logind[1308]: New session 7 of user core. Oct 29 00:25:41.416018 systemd[1]: Started session-7.scope. Oct 29 00:25:41.552417 sshd[3483]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:41.556141 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:57444.service: Deactivated successfully. Oct 29 00:25:41.557425 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 00:25:41.557546 systemd-logind[1308]: Session 7 logged out. Waiting for processes to exit. Oct 29 00:25:41.559818 systemd-logind[1308]: Removed session 7. Oct 29 00:25:46.557133 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:57448.service. Oct 29 00:25:46.640729 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 57448 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:46.642136 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:46.646915 systemd-logind[1308]: New session 8 of user core. Oct 29 00:25:46.647218 systemd[1]: Started session-8.scope. Oct 29 00:25:46.782885 sshd[3498]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:46.785782 systemd-logind[1308]: Session 8 logged out. Waiting for processes to exit. Oct 29 00:25:46.786008 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:57448.service: Deactivated successfully. Oct 29 00:25:46.786990 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 00:25:46.788192 systemd-logind[1308]: Removed session 8. Oct 29 00:25:51.787296 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:45864.service. Oct 29 00:25:51.825962 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 45864 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:51.829214 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:51.839514 systemd-logind[1308]: New session 9 of user core. Oct 29 00:25:51.840008 systemd[1]: Started session-9.scope. Oct 29 00:25:51.989669 sshd[3513]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:51.992662 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:45878.service. Oct 29 00:25:51.994512 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:45864.service: Deactivated successfully. Oct 29 00:25:51.995852 systemd-logind[1308]: Session 9 logged out. Waiting for processes to exit. 
Oct 29 00:25:51.995917 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 00:25:51.996773 systemd-logind[1308]: Removed session 9. Oct 29 00:25:52.030630 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 45878 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:52.032012 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:52.036051 systemd-logind[1308]: New session 10 of user core. Oct 29 00:25:52.037563 systemd[1]: Started session-10.scope. Oct 29 00:25:52.225787 sshd[3527]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:52.228099 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:45894.service. Oct 29 00:25:52.232717 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:45878.service: Deactivated successfully. Oct 29 00:25:52.235472 systemd-logind[1308]: Session 10 logged out. Waiting for processes to exit. Oct 29 00:25:52.235544 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 00:25:52.242471 systemd-logind[1308]: Removed session 10. Oct 29 00:25:52.280687 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 45894 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:52.282678 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:52.286620 systemd-logind[1308]: New session 11 of user core. Oct 29 00:25:52.287498 systemd[1]: Started session-11.scope. Oct 29 00:25:52.415276 sshd[3540]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:52.418016 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:45894.service: Deactivated successfully. Oct 29 00:25:52.419177 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 00:25:52.419651 systemd-logind[1308]: Session 11 logged out. Waiting for processes to exit. Oct 29 00:25:52.420450 systemd-logind[1308]: Removed session 11. Oct 29 00:25:57.417983 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:45906.service. Oct 29 00:25:57.454704 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:25:57.456102 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:25:57.459762 systemd-logind[1308]: New session 12 of user core. Oct 29 00:25:57.461018 systemd[1]: Started session-12.scope. Oct 29 00:25:57.607116 sshd[3556]: pam_unix(sshd:session): session closed for user core Oct 29 00:25:57.611205 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:45906.service: Deactivated successfully. Oct 29 00:25:57.613055 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 00:25:57.613707 systemd-logind[1308]: Session 12 logged out. Waiting for processes to exit. Oct 29 00:25:57.615907 systemd-logind[1308]: Removed session 12. Oct 29 00:26:02.612230 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:53290.service. Oct 29 00:26:02.645907 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 53290 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:02.647926 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:02.654032 systemd-logind[1308]: New session 13 of user core. Oct 29 00:26:02.654705 systemd[1]: Started session-13.scope. Oct 29 00:26:02.791855 sshd[3573]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:02.794331 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:53306.service. 
Oct 29 00:26:02.799795 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:53290.service: Deactivated successfully. Oct 29 00:26:02.800864 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 00:26:02.803055 systemd-logind[1308]: Session 13 logged out. Waiting for processes to exit. Oct 29 00:26:02.804032 systemd-logind[1308]: Removed session 13. Oct 29 00:26:02.831753 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 53306 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:02.833122 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:02.837020 systemd-logind[1308]: New session 14 of user core. Oct 29 00:26:02.838423 systemd[1]: Started session-14.scope. Oct 29 00:26:03.034093 sshd[3586]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:03.036107 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:53318.service. Oct 29 00:26:03.038601 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:53306.service: Deactivated successfully. Oct 29 00:26:03.039520 systemd-logind[1308]: Session 14 logged out. Waiting for processes to exit. Oct 29 00:26:03.039591 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 00:26:03.040563 systemd-logind[1308]: Removed session 14. Oct 29 00:26:03.073884 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 53318 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:03.075802 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:03.079549 systemd-logind[1308]: New session 15 of user core. Oct 29 00:26:03.080502 systemd[1]: Started session-15.scope. Oct 29 00:26:03.675573 sshd[3598]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:03.678096 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:53332.service. Oct 29 00:26:03.680171 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:53318.service: Deactivated successfully. Oct 29 00:26:03.683641 systemd-logind[1308]: Session 15 logged out. Waiting for processes to exit. Oct 29 00:26:03.683710 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 00:26:03.686179 systemd-logind[1308]: Removed session 15. Oct 29 00:26:03.735569 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 53332 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:03.736944 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:03.741062 systemd-logind[1308]: New session 16 of user core. Oct 29 00:26:03.741996 systemd[1]: Started session-16.scope. Oct 29 00:26:03.975617 sshd[3616]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:03.978158 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:53348.service. Oct 29 00:26:03.980799 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:53332.service: Deactivated successfully. Oct 29 00:26:03.984251 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 00:26:03.984523 systemd-logind[1308]: Session 16 logged out. Waiting for processes to exit. Oct 29 00:26:03.985547 systemd-logind[1308]: Removed session 16. Oct 29 00:26:04.017614 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 53348 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:04.019185 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:04.023182 systemd-logind[1308]: New session 17 of user core. Oct 29 00:26:04.025318 systemd[1]: Started session-17.scope. 
Oct 29 00:26:04.142329 sshd[3631]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:04.145031 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:53348.service: Deactivated successfully. Oct 29 00:26:04.146243 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 00:26:04.146716 systemd-logind[1308]: Session 17 logged out. Waiting for processes to exit. Oct 29 00:26:04.147585 systemd-logind[1308]: Removed session 17. Oct 29 00:26:09.145125 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:53364.service. Oct 29 00:26:09.185775 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 53364 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:09.187913 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:09.195827 systemd-logind[1308]: New session 18 of user core. Oct 29 00:26:09.196871 systemd[1]: Started session-18.scope. Oct 29 00:26:09.312124 sshd[3649]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:09.314653 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:53364.service: Deactivated successfully. Oct 29 00:26:09.315542 systemd-logind[1308]: Session 18 logged out. Waiting for processes to exit. Oct 29 00:26:09.315599 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 00:26:09.316339 systemd-logind[1308]: Removed session 18. Oct 29 00:26:14.319869 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:52958.service. Oct 29 00:26:14.366741 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 52958 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:14.368551 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:14.372934 systemd-logind[1308]: New session 19 of user core. Oct 29 00:26:14.373504 systemd[1]: Started session-19.scope. Oct 29 00:26:14.505875 sshd[3665]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:14.508481 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:52958.service: Deactivated successfully. Oct 29 00:26:14.509570 systemd-logind[1308]: Session 19 logged out. Waiting for processes to exit. Oct 29 00:26:14.509725 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 00:26:14.511068 systemd-logind[1308]: Removed session 19. Oct 29 00:26:16.692932 kubelet[2077]: E1029 00:26:16.692872 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:18.695661 kubelet[2077]: E1029 00:26:18.695623 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:19.513635 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:33326.service. Oct 29 00:26:19.551978 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 33326 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:19.553827 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:19.558901 systemd[1]: Started session-20.scope. Oct 29 00:26:19.560032 systemd-logind[1308]: New session 20 of user core. Oct 29 00:26:19.687801 sshd[3681]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:19.690351 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:33326.service: Deactivated successfully. Oct 29 00:26:19.693891 systemd[1]: session-20.scope: Deactivated successfully. 
Oct 29 00:26:19.695446 systemd-logind[1308]: Session 20 logged out. Waiting for processes to exit. Oct 29 00:26:19.697149 systemd-logind[1308]: Removed session 20. Oct 29 00:26:20.693180 kubelet[2077]: E1029 00:26:20.693148 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:24.695490 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:33330.service. Oct 29 00:26:24.733047 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 33330 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:24.734637 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:24.741681 systemd[1]: Started session-21.scope. Oct 29 00:26:24.742632 systemd-logind[1308]: New session 21 of user core. Oct 29 00:26:24.883497 sshd[3696]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:24.890199 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:33338.service. Oct 29 00:26:24.892936 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:33330.service: Deactivated successfully. Oct 29 00:26:24.894993 systemd-logind[1308]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:26:24.895153 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:26:24.896935 systemd-logind[1308]: Removed session 21. Oct 29 00:26:24.925331 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 33338 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:24.927195 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:24.933121 systemd-logind[1308]: New session 22 of user core. Oct 29 00:26:24.934170 systemd[1]: Started session-22.scope. Oct 29 00:26:27.899265 systemd[1]: run-containerd-runc-k8s.io-1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe-runc.2khagj.mount: Deactivated successfully. Oct 29 00:26:27.910703 env[1321]: time="2025-10-29T00:26:27.910654500Z" level=info msg="StopContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" with timeout 30 (s)" Oct 29 00:26:27.911674 env[1321]: time="2025-10-29T00:26:27.911641940Z" level=info msg="Stop container \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" with signal terminated" Oct 29 00:26:27.920117 env[1321]: time="2025-10-29T00:26:27.920048616Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 00:26:27.926846 env[1321]: time="2025-10-29T00:26:27.926808093Z" level=info msg="StopContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" with timeout 2 (s)" Oct 29 00:26:27.927382 env[1321]: time="2025-10-29T00:26:27.927344773Z" level=info msg="Stop container \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" with signal terminated" Oct 29 00:26:27.933341 systemd-networkd[1101]: lxc_health: Link DOWN Oct 29 00:26:27.933349 systemd-networkd[1101]: lxc_health: Lost carrier Oct 29 00:26:27.948377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da-rootfs.mount: Deactivated successfully. 
Oct 29 00:26:27.982437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe-rootfs.mount: Deactivated successfully. Oct 29 00:26:28.018936 env[1321]: time="2025-10-29T00:26:28.018834623Z" level=info msg="shim disconnected" id=c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da Oct 29 00:26:28.018936 env[1321]: time="2025-10-29T00:26:28.018896823Z" level=warning msg="cleaning up after shim disconnected" id=c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da namespace=k8s.io Oct 29 00:26:28.018936 env[1321]: time="2025-10-29T00:26:28.018908303Z" level=info msg="cleaning up dead shim" Oct 29 00:26:28.026195 env[1321]: time="2025-10-29T00:26:28.026131744Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3780 runtime=io.containerd.runc.v2\n" Oct 29 00:26:28.035139 env[1321]: time="2025-10-29T00:26:28.035074544Z" level=info msg="shim disconnected" id=1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe Oct 29 00:26:28.035472 env[1321]: time="2025-10-29T00:26:28.035437984Z" level=warning msg="cleaning up after shim disconnected" id=1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe namespace=k8s.io Oct 29 00:26:28.035583 env[1321]: time="2025-10-29T00:26:28.035567584Z" level=info msg="cleaning up dead shim" Oct 29 00:26:28.043282 env[1321]: time="2025-10-29T00:26:28.043239105Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3792 runtime=io.containerd.runc.v2\n" Oct 29 00:26:28.049603 env[1321]: time="2025-10-29T00:26:28.049556026Z" level=info msg="StopContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" returns successfully" Oct 29 00:26:28.050288 env[1321]: time="2025-10-29T00:26:28.050256906Z" level=info msg="StopPodSandbox for \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\"" Oct 29 00:26:28.050576 env[1321]: time="2025-10-29T00:26:28.050551626Z" level=info msg="Container to stop \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.075376 env[1321]: time="2025-10-29T00:26:28.075326508Z" level=info msg="StopContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" returns successfully" Oct 29 00:26:28.076128 env[1321]: time="2025-10-29T00:26:28.076091948Z" level=info msg="StopPodSandbox for \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\"" Oct 29 00:26:28.076394 env[1321]: time="2025-10-29T00:26:28.076362468Z" level=info msg="Container to stop \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.077155 env[1321]: time="2025-10-29T00:26:28.076537068Z" level=info msg="Container to stop \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.077155 env[1321]: time="2025-10-29T00:26:28.076557108Z" level=info msg="Container to stop \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.077155 env[1321]: time="2025-10-29T00:26:28.076569268Z" level=info msg="Container to stop 
\"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.077155 env[1321]: time="2025-10-29T00:26:28.076580908Z" level=info msg="Container to stop \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 29 00:26:28.102033 env[1321]: time="2025-10-29T00:26:28.101973910Z" level=info msg="shim disconnected" id=c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74 Oct 29 00:26:28.102033 env[1321]: time="2025-10-29T00:26:28.102022710Z" level=warning msg="cleaning up after shim disconnected" id=c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74 namespace=k8s.io Oct 29 00:26:28.102033 env[1321]: time="2025-10-29T00:26:28.102032950Z" level=info msg="cleaning up dead shim" Oct 29 00:26:28.104010 env[1321]: time="2025-10-29T00:26:28.103960391Z" level=info msg="shim disconnected" id=74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0 Oct 29 00:26:28.104010 env[1321]: time="2025-10-29T00:26:28.104002591Z" level=warning msg="cleaning up after shim disconnected" id=74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0 namespace=k8s.io Oct 29 00:26:28.104010 env[1321]: time="2025-10-29T00:26:28.104011991Z" level=info msg="cleaning up dead shim" Oct 29 00:26:28.112141 env[1321]: time="2025-10-29T00:26:28.112089711Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3846 runtime=io.containerd.runc.v2\n" Oct 29 00:26:28.112510 env[1321]: time="2025-10-29T00:26:28.112480511Z" level=info msg="TearDown network for sandbox \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\" successfully" Oct 29 00:26:28.112554 env[1321]: time="2025-10-29T00:26:28.112510711Z" level=info msg="StopPodSandbox for \"c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74\" returns successfully" Oct 29 00:26:28.113931 env[1321]: time="2025-10-29T00:26:28.113899032Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\n" Oct 29 00:26:28.115036 env[1321]: time="2025-10-29T00:26:28.115002952Z" level=info msg="TearDown network for sandbox \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" successfully" Oct 29 00:26:28.115757 env[1321]: time="2025-10-29T00:26:28.115724352Z" level=info msg="StopPodSandbox for \"74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0\" returns successfully" Oct 29 00:26:28.287287 kubelet[2077]: I1029 00:26:28.286242 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-run\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.287779 kubelet[2077]: I1029 00:26:28.287752 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1738f709-b496-4870-834f-ea4a9dbbd778-clustermesh-secrets\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.287864 kubelet[2077]: I1029 00:26:28.287850 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-bpf-maps\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.287942 kubelet[2077]: I1029 00:26:28.287927 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/181f4f04-c0d6-44e2-b245-8c16cd1315f5-cilium-config-path\") pod \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\" (UID: \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\") " Oct 29 00:26:28.288093 kubelet[2077]: I1029 00:26:28.286354 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.288152 kubelet[2077]: I1029 00:26:28.288019 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.288152 kubelet[2077]: I1029 00:26:28.288066 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-etc-cni-netd\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288228 kubelet[2077]: I1029 00:26:28.288157 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cni-path\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288228 kubelet[2077]: I1029 00:26:28.288193 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-config-path\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288228 kubelet[2077]: I1029 00:26:28.288217 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-lib-modules\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288303 kubelet[2077]: I1029 00:26:28.288261 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-hubble-tls\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288303 kubelet[2077]: I1029 00:26:28.288280 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-hostproc\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288303 kubelet[2077]: I1029 00:26:28.288297 2077 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jvsbt\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-kube-api-access-jvsbt\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288375 kubelet[2077]: I1029 00:26:28.288315 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5vbg\" (UniqueName: \"kubernetes.io/projected/181f4f04-c0d6-44e2-b245-8c16cd1315f5-kube-api-access-h5vbg\") pod \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\" (UID: \"181f4f04-c0d6-44e2-b245-8c16cd1315f5\") " Oct 29 00:26:28.288375 kubelet[2077]: I1029 00:26:28.288331 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-cgroup\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288375 kubelet[2077]: I1029 00:26:28.288350 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-net\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288375 kubelet[2077]: I1029 00:26:28.288366 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-kernel\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288494 kubelet[2077]: I1029 00:26:28.288409 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-xtables-lock\") pod \"1738f709-b496-4870-834f-ea4a9dbbd778\" (UID: \"1738f709-b496-4870-834f-ea4a9dbbd778\") " Oct 29 00:26:28.288494 kubelet[2077]: I1029 00:26:28.288464 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.288494 kubelet[2077]: I1029 00:26:28.288473 2077 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.288560 kubelet[2077]: I1029 00:26:28.288507 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.288560 kubelet[2077]: I1029 00:26:28.288527 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cni-path" (OuterVolumeSpecName: "cni-path") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.288650 kubelet[2077]: I1029 00:26:28.288628 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.292560 kubelet[2077]: I1029 00:26:28.292498 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:26:28.292691 kubelet[2077]: I1029 00:26:28.292575 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181f4f04-c0d6-44e2-b245-8c16cd1315f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "181f4f04-c0d6-44e2-b245-8c16cd1315f5" (UID: "181f4f04-c0d6-44e2-b245-8c16cd1315f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:26:28.292976 kubelet[2077]: I1029 00:26:28.292949 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-kube-api-access-jvsbt" (OuterVolumeSpecName: "kube-api-access-jvsbt") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "kube-api-access-jvsbt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:26:28.293092 kubelet[2077]: I1029 00:26:28.293078 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-hostproc" (OuterVolumeSpecName: "hostproc") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.293165 kubelet[2077]: I1029 00:26:28.293153 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.293258 kubelet[2077]: I1029 00:26:28.293245 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.293332 kubelet[2077]: I1029 00:26:28.293319 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.293433 kubelet[2077]: I1029 00:26:28.293416 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:28.295486 kubelet[2077]: I1029 00:26:28.295433 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181f4f04-c0d6-44e2-b245-8c16cd1315f5-kube-api-access-h5vbg" (OuterVolumeSpecName: "kube-api-access-h5vbg") pod "181f4f04-c0d6-44e2-b245-8c16cd1315f5" (UID: "181f4f04-c0d6-44e2-b245-8c16cd1315f5"). InnerVolumeSpecName "kube-api-access-h5vbg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:26:28.295570 kubelet[2077]: I1029 00:26:28.295499 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:26:28.295730 kubelet[2077]: I1029 00:26:28.295700 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1738f709-b496-4870-834f-ea4a9dbbd778-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1738f709-b496-4870-834f-ea4a9dbbd778" (UID: "1738f709-b496-4870-834f-ea4a9dbbd778"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:26:28.389165 kubelet[2077]: I1029 00:26:28.389123 2077 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389384 kubelet[2077]: I1029 00:26:28.389370 2077 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389503 kubelet[2077]: I1029 00:26:28.389491 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389568 kubelet[2077]: I1029 00:26:28.389559 2077 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389627 kubelet[2077]: I1029 00:26:28.389616 2077 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389688 kubelet[2077]: I1029 00:26:28.389678 2077 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389762 kubelet[2077]: I1029 00:26:28.389751 2077 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvsbt\" (UniqueName: \"kubernetes.io/projected/1738f709-b496-4870-834f-ea4a9dbbd778-kube-api-access-jvsbt\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389825 kubelet[2077]: I1029 00:26:28.389814 2077 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h5vbg\" (UniqueName: \"kubernetes.io/projected/181f4f04-c0d6-44e2-b245-8c16cd1315f5-kube-api-access-h5vbg\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389885 kubelet[2077]: I1029 00:26:28.389874 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.389945 kubelet[2077]: I1029 00:26:28.389934 2077 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.390007 kubelet[2077]: I1029 00:26:28.389996 2077 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.390074 kubelet[2077]: I1029 00:26:28.390063 2077 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1738f709-b496-4870-834f-ea4a9dbbd778-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.390136 kubelet[2077]: I1029 00:26:28.390126 2077 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1738f709-b496-4870-834f-ea4a9dbbd778-clustermesh-secrets\") on 
node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.390207 kubelet[2077]: I1029 00:26:28.390196 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/181f4f04-c0d6-44e2-b245-8c16cd1315f5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:28.693954 kubelet[2077]: E1029 00:26:28.693908 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:28.892702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0-rootfs.mount: Deactivated successfully. Oct 29 00:26:28.893720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74-rootfs.mount: Deactivated successfully. Oct 29 00:26:28.893832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74aa6998d6057f76912b178c47ac789c984a19702fc99da7d7af009720f513a0-shm.mount: Deactivated successfully. Oct 29 00:26:28.893913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c14a3688d8206afadcee8224d7884aec3425b33482d0779df7b7d8ec74fa7e74-shm.mount: Deactivated successfully. Oct 29 00:26:28.893997 systemd[1]: var-lib-kubelet-pods-1738f709\x2db496\x2d4870\x2d834f\x2dea4a9dbbd778-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvsbt.mount: Deactivated successfully. Oct 29 00:26:28.894084 systemd[1]: var-lib-kubelet-pods-181f4f04\x2dc0d6\x2d44e2\x2db245\x2d8c16cd1315f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5vbg.mount: Deactivated successfully. Oct 29 00:26:28.894185 systemd[1]: var-lib-kubelet-pods-1738f709\x2db496\x2d4870\x2d834f\x2dea4a9dbbd778-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 29 00:26:28.894273 systemd[1]: var-lib-kubelet-pods-1738f709\x2db496\x2d4870\x2d834f\x2dea4a9dbbd778-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 29 00:26:28.947356 kubelet[2077]: I1029 00:26:28.945623 2077 scope.go:117] "RemoveContainer" containerID="1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe" Oct 29 00:26:28.953308 env[1321]: time="2025-10-29T00:26:28.953251708Z" level=info msg="RemoveContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\"" Oct 29 00:26:28.958001 env[1321]: time="2025-10-29T00:26:28.957954188Z" level=info msg="RemoveContainer for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" returns successfully" Oct 29 00:26:28.958605 kubelet[2077]: I1029 00:26:28.958581 2077 scope.go:117] "RemoveContainer" containerID="e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077" Oct 29 00:26:28.960134 env[1321]: time="2025-10-29T00:26:28.960084109Z" level=info msg="RemoveContainer for \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\"" Oct 29 00:26:28.963448 env[1321]: time="2025-10-29T00:26:28.963386429Z" level=info msg="RemoveContainer for \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\" returns successfully" Oct 29 00:26:28.963688 kubelet[2077]: I1029 00:26:28.963662 2077 scope.go:117] "RemoveContainer" containerID="cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b" Oct 29 00:26:28.968450 env[1321]: time="2025-10-29T00:26:28.967888189Z" level=info msg="RemoveContainer for \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\"" Oct 29 00:26:28.973960 env[1321]: time="2025-10-29T00:26:28.973903830Z" level=info msg="RemoveContainer for \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\" returns successfully" Oct 29 00:26:28.974172 kubelet[2077]: I1029 00:26:28.974141 2077 scope.go:117] "RemoveContainer" containerID="dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2" Oct 29 00:26:28.975519 env[1321]: time="2025-10-29T00:26:28.975479430Z" level=info msg="RemoveContainer for \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\"" Oct 29 00:26:28.979576 env[1321]: time="2025-10-29T00:26:28.979417670Z" level=info msg="RemoveContainer for \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\" returns successfully" Oct 29 00:26:28.980007 kubelet[2077]: I1029 00:26:28.979941 2077 scope.go:117] "RemoveContainer" containerID="33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2" Oct 29 00:26:28.982634 env[1321]: time="2025-10-29T00:26:28.982428431Z" level=info msg="RemoveContainer for \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\"" Oct 29 00:26:28.986383 env[1321]: time="2025-10-29T00:26:28.986321671Z" level=info msg="RemoveContainer for \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\" returns successfully" Oct 29 00:26:28.987749 kubelet[2077]: I1029 00:26:28.987696 2077 scope.go:117] "RemoveContainer" containerID="1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe" Oct 29 00:26:28.988082 env[1321]: time="2025-10-29T00:26:28.988003711Z" level=error msg="ContainerStatus for \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\": not found" Oct 29 00:26:28.988248 kubelet[2077]: E1029 00:26:28.988220 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\": not found" containerID="1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe" Oct 29 00:26:28.989540 kubelet[2077]: I1029 00:26:28.989429 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe"} err="failed to get container status \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ece39eb01f690a7812c2aea40da614b76574c0514ce8ce6b58dae93831fabfe\": not found" Oct 29 00:26:28.989653 kubelet[2077]: I1029 00:26:28.989546 2077 scope.go:117] "RemoveContainer" containerID="e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077" Oct 29 00:26:28.989866 env[1321]: time="2025-10-29T00:26:28.989802791Z" level=error msg="ContainerStatus for \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\": not found" Oct 29 00:26:28.990013 kubelet[2077]: E1029 00:26:28.989986 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\": not found" containerID="e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077" Oct 29 00:26:28.990062 kubelet[2077]: I1029 00:26:28.990023 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077"} err="failed to get container status \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\": rpc error: code = NotFound desc = an error occurred when try to find container \"e22c5f66b84247a13dcf870f47168e38662a5991dbb2e24694728ea669bfa077\": not found" Oct 29 00:26:28.990133 kubelet[2077]: I1029 00:26:28.990061 2077 scope.go:117] "RemoveContainer" containerID="cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b" Oct 29 00:26:28.990315 env[1321]: time="2025-10-29T00:26:28.990263391Z" level=error msg="ContainerStatus for \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\": not found" Oct 29 00:26:28.990427 kubelet[2077]: E1029 00:26:28.990408 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\": not found" containerID="cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b" Oct 29 00:26:28.990473 kubelet[2077]: I1029 00:26:28.990434 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b"} err="failed to get container status \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\": rpc error: code = NotFound desc = an error occurred when try to find container \"cef86218e5fb35d6731d89acfabd370b5d11df130599915f1a21b7a37285254b\": not found" Oct 29 00:26:28.990473 kubelet[2077]: I1029 00:26:28.990450 2077 scope.go:117] "RemoveContainer" 
containerID="dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2" Oct 29 00:26:28.990779 env[1321]: time="2025-10-29T00:26:28.990688471Z" level=error msg="ContainerStatus for \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\": not found" Oct 29 00:26:28.990998 kubelet[2077]: E1029 00:26:28.990968 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\": not found" containerID="dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2" Oct 29 00:26:28.991053 kubelet[2077]: I1029 00:26:28.991000 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2"} err="failed to get container status \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcc2fb3038435b24c54b50e15b9b38fd2dc05239d4c1a426eef7dd2cc75851f2\": not found" Oct 29 00:26:28.991053 kubelet[2077]: I1029 00:26:28.991019 2077 scope.go:117] "RemoveContainer" containerID="33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2" Oct 29 00:26:28.991227 env[1321]: time="2025-10-29T00:26:28.991165791Z" level=error msg="ContainerStatus for \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\": not found" Oct 29 00:26:28.991336 kubelet[2077]: E1029 00:26:28.991314 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\": not found" containerID="33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2" Oct 29 00:26:28.991376 kubelet[2077]: I1029 00:26:28.991341 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2"} err="failed to get container status \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\": rpc error: code = NotFound desc = an error occurred when try to find container \"33f3a993a7c35b97ab5a4e58cc720c27dec818ca6a7fb4d68c27493383f85ab2\": not found" Oct 29 00:26:28.991376 kubelet[2077]: I1029 00:26:28.991360 2077 scope.go:117] "RemoveContainer" containerID="c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da" Oct 29 00:26:28.992519 env[1321]: time="2025-10-29T00:26:28.992472991Z" level=info msg="RemoveContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\"" Oct 29 00:26:28.995563 env[1321]: time="2025-10-29T00:26:28.995498272Z" level=info msg="RemoveContainer for \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" returns successfully" Oct 29 00:26:28.995750 kubelet[2077]: I1029 00:26:28.995707 2077 scope.go:117] "RemoveContainer" containerID="c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da" Oct 29 00:26:28.996012 env[1321]: time="2025-10-29T00:26:28.995952072Z" level=error msg="ContainerStatus for 
\"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\": not found" Oct 29 00:26:28.996153 kubelet[2077]: E1029 00:26:28.996132 2077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\": not found" containerID="c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da" Oct 29 00:26:28.996210 kubelet[2077]: I1029 00:26:28.996165 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da"} err="failed to get container status \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1b06c0018fa3759812a516ebdbc04deefd86626cffcc67a9d3ab550e23598da\": not found" Oct 29 00:26:29.837755 sshd[3710]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:29.840193 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:57010.service. Oct 29 00:26:29.840751 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:33338.service: Deactivated successfully. Oct 29 00:26:29.841866 systemd-logind[1308]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:26:29.841954 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:26:29.845968 systemd-logind[1308]: Removed session 22. Oct 29 00:26:29.879934 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:29.881686 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:29.885189 systemd-logind[1308]: New session 23 of user core. Oct 29 00:26:29.886049 systemd[1]: Started session-23.scope. Oct 29 00:26:30.695316 kubelet[2077]: I1029 00:26:30.695267 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1738f709-b496-4870-834f-ea4a9dbbd778" path="/var/lib/kubelet/pods/1738f709-b496-4870-834f-ea4a9dbbd778/volumes" Oct 29 00:26:30.695953 kubelet[2077]: I1029 00:26:30.695908 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181f4f04-c0d6-44e2-b245-8c16cd1315f5" path="/var/lib/kubelet/pods/181f4f04-c0d6-44e2-b245-8c16cd1315f5/volumes" Oct 29 00:26:30.756715 kubelet[2077]: E1029 00:26:30.756670 2077 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 29 00:26:31.296286 sshd[3875]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:31.298891 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:57014.service. Oct 29 00:26:31.310617 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:57010.service: Deactivated successfully. Oct 29 00:26:31.315721 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:26:31.318208 systemd-logind[1308]: Session 23 logged out. Waiting for processes to exit. 
Oct 29 00:26:31.321556 kubelet[2077]: I1029 00:26:31.319862 2077 memory_manager.go:355] "RemoveStaleState removing state" podUID="1738f709-b496-4870-834f-ea4a9dbbd778" containerName="cilium-agent" Oct 29 00:26:31.321556 kubelet[2077]: I1029 00:26:31.319894 2077 memory_manager.go:355] "RemoveStaleState removing state" podUID="181f4f04-c0d6-44e2-b245-8c16cd1315f5" containerName="cilium-operator" Oct 29 00:26:31.328578 systemd-logind[1308]: Removed session 23. Oct 29 00:26:31.363488 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 57014 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:31.365003 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:31.368300 systemd-logind[1308]: New session 24 of user core. Oct 29 00:26:31.369135 systemd[1]: Started session-24.scope. Oct 29 00:26:31.409662 kubelet[2077]: I1029 00:26:31.409604 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-config-path\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409662 kubelet[2077]: I1029 00:26:31.409657 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hubble-tls\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409824 kubelet[2077]: I1029 00:26:31.409715 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-xtables-lock\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409824 kubelet[2077]: I1029 00:26:31.409738 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slfm\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-kube-api-access-6slfm\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409824 kubelet[2077]: I1029 00:26:31.409784 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-ipsec-secrets\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409824 kubelet[2077]: I1029 00:26:31.409801 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-etc-cni-netd\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 00:26:31.409842 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-clustermesh-secrets\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 
00:26:31.409863 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cni-path\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 00:26:31.409881 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hostproc\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 00:26:31.409920 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-cgroup\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 00:26:31.409936 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-lib-modules\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.409950 kubelet[2077]: I1029 00:26:31.409953 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-net\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.410078 kubelet[2077]: I1029 00:26:31.409990 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-run\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.410078 kubelet[2077]: I1029 00:26:31.410005 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-bpf-maps\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.410078 kubelet[2077]: I1029 00:26:31.410033 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-kernel\") pod \"cilium-x6k2k\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " pod="kube-system/cilium-x6k2k" Oct 29 00:26:31.494920 sshd[3888]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:31.497335 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:57028.service. Oct 29 00:26:31.501028 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:57014.service: Deactivated successfully. Oct 29 00:26:31.501952 systemd-logind[1308]: Session 24 logged out. Waiting for processes to exit. Oct 29 00:26:31.502006 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 00:26:31.502663 systemd-logind[1308]: Removed session 24. 
Oct 29 00:26:31.509430 kubelet[2077]: E1029 00:26:31.509327 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6slfm lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-x6k2k" podUID="d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" Oct 29 00:26:31.544880 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 57028 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:26:31.546271 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:26:31.549704 systemd-logind[1308]: New session 25 of user core. Oct 29 00:26:31.550829 systemd[1]: Started session-25.scope. Oct 29 00:26:32.120120 kubelet[2077]: I1029 00:26:32.120068 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hubble-tls\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120120 kubelet[2077]: I1029 00:26:32.120115 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6slfm\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-kube-api-access-6slfm\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120141 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-bpf-maps\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120166 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-config-path\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120185 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cni-path\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120203 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hostproc\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120219 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-cgroup\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120558 kubelet[2077]: I1029 00:26:32.120242 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-net\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120259 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-kernel\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120277 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-lib-modules\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120291 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-run\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120311 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-ipsec-secrets\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120329 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-xtables-lock\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120694 kubelet[2077]: I1029 00:26:32.120343 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-etc-cni-netd\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120820 kubelet[2077]: I1029 00:26:32.120361 2077 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-clustermesh-secrets\") pod \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\" (UID: \"d0d3c90d-818c-476f-a4b7-ebe7bb2d63db\") " Oct 29 00:26:32.120820 kubelet[2077]: I1029 00:26:32.120705 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.123316 kubelet[2077]: I1029 00:26:32.123284 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.123420 kubelet[2077]: I1029 00:26:32.123326 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.123420 kubelet[2077]: I1029 00:26:32.123343 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.123420 kubelet[2077]: I1029 00:26:32.123360 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.124476 systemd[1]: var-lib-kubelet-pods-d0d3c90d\x2d818c\x2d476f\x2da4b7\x2debe7bb2d63db-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 29 00:26:32.124628 systemd[1]: var-lib-kubelet-pods-d0d3c90d\x2d818c\x2d476f\x2da4b7\x2debe7bb2d63db-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 29 00:26:32.125003 kubelet[2077]: I1029 00:26:32.124960 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 00:26:32.125275 kubelet[2077]: I1029 00:26:32.125249 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.125325 kubelet[2077]: I1029 00:26:32.125283 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cni-path" (OuterVolumeSpecName: "cni-path") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.125325 kubelet[2077]: I1029 00:26:32.125300 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hostproc" (OuterVolumeSpecName: "hostproc") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.125325 kubelet[2077]: I1029 00:26:32.125321 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.125639 kubelet[2077]: I1029 00:26:32.125606 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:26:32.125695 kubelet[2077]: I1029 00:26:32.125662 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 29 00:26:32.128016 kubelet[2077]: I1029 00:26:32.127974 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:26:32.130638 kubelet[2077]: I1029 00:26:32.130596 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-kube-api-access-6slfm" (OuterVolumeSpecName: "kube-api-access-6slfm") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "kube-api-access-6slfm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:26:32.131248 kubelet[2077]: I1029 00:26:32.131206 2077 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" (UID: "d0d3c90d-818c-476f-a4b7-ebe7bb2d63db"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 00:26:32.220923 kubelet[2077]: I1029 00:26:32.220875 2077 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.220923 kubelet[2077]: I1029 00:26:32.220909 2077 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6slfm\" (UniqueName: \"kubernetes.io/projected/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-kube-api-access-6slfm\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.220923 kubelet[2077]: I1029 00:26:32.220929 2077 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220938 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220946 2077 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220954 2077 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220962 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220969 2077 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220977 2077 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220985 2077 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221127 kubelet[2077]: I1029 00:26:32.220992 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221309 kubelet[2077]: I1029 00:26:32.221000 2077 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221309 kubelet[2077]: I1029 00:26:32.221008 2077 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-xtables-lock\") on node \"localhost\" DevicePath 
\"\"" Oct 29 00:26:32.221309 kubelet[2077]: I1029 00:26:32.221015 2077 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.221309 kubelet[2077]: I1029 00:26:32.221022 2077 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 29 00:26:32.521264 systemd[1]: var-lib-kubelet-pods-d0d3c90d\x2d818c\x2d476f\x2da4b7\x2debe7bb2d63db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6slfm.mount: Deactivated successfully. Oct 29 00:26:32.521434 systemd[1]: var-lib-kubelet-pods-d0d3c90d\x2d818c\x2d476f\x2da4b7\x2debe7bb2d63db-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 29 00:26:32.752929 kubelet[2077]: I1029 00:26:32.752875 2077 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-29T00:26:32Z","lastTransitionTime":"2025-10-29T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 29 00:26:33.126067 kubelet[2077]: I1029 00:26:33.126026 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8090460a-0fb7-415a-9bcf-f771e50cf4df-cilium-ipsec-secrets\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.126649 kubelet[2077]: I1029 00:26:33.126626 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-cilium-run\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.126735 kubelet[2077]: I1029 00:26:33.126721 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8090460a-0fb7-415a-9bcf-f771e50cf4df-cilium-config-path\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.126805 kubelet[2077]: I1029 00:26:33.126791 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-host-proc-sys-net\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.126897 kubelet[2077]: I1029 00:26:33.126882 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndw2\" (UniqueName: \"kubernetes.io/projected/8090460a-0fb7-415a-9bcf-f771e50cf4df-kube-api-access-hndw2\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.126982 kubelet[2077]: I1029 00:26:33.126969 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-hostproc\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127054 kubelet[2077]: I1029 00:26:33.127043 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-bpf-maps\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127118 kubelet[2077]: I1029 00:26:33.127107 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-cilium-cgroup\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127209 kubelet[2077]: I1029 00:26:33.127197 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8090460a-0fb7-415a-9bcf-f771e50cf4df-hubble-tls\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127337 kubelet[2077]: I1029 00:26:33.127289 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-etc-cni-netd\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127377 kubelet[2077]: I1029 00:26:33.127337 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-lib-modules\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127377 kubelet[2077]: I1029 00:26:33.127360 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-xtables-lock\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127443 kubelet[2077]: I1029 00:26:33.127383 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-cni-path\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127443 kubelet[2077]: I1029 00:26:33.127410 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8090460a-0fb7-415a-9bcf-f771e50cf4df-clustermesh-secrets\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.127504 kubelet[2077]: I1029 00:26:33.127448 2077 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8090460a-0fb7-415a-9bcf-f771e50cf4df-host-proc-sys-kernel\") pod \"cilium-57dd9\" (UID: \"8090460a-0fb7-415a-9bcf-f771e50cf4df\") " pod="kube-system/cilium-57dd9" Oct 29 00:26:33.304922 
kubelet[2077]: E1029 00:26:33.304883 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:33.306770 env[1321]: time="2025-10-29T00:26:33.306384997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57dd9,Uid:8090460a-0fb7-415a-9bcf-f771e50cf4df,Namespace:kube-system,Attempt:0,}" Oct 29 00:26:33.320192 env[1321]: time="2025-10-29T00:26:33.320092950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:26:33.320192 env[1321]: time="2025-10-29T00:26:33.320160831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:26:33.320485 env[1321]: time="2025-10-29T00:26:33.320448471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:26:33.320778 env[1321]: time="2025-10-29T00:26:33.320738232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4 pid=3935 runtime=io.containerd.runc.v2 Oct 29 00:26:33.359966 env[1321]: time="2025-10-29T00:26:33.359913689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57dd9,Uid:8090460a-0fb7-415a-9bcf-f771e50cf4df,Namespace:kube-system,Attempt:0,} returns sandbox id \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\"" Oct 29 00:26:33.360653 kubelet[2077]: E1029 00:26:33.360624 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:33.364702 env[1321]: time="2025-10-29T00:26:33.364651781Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 00:26:33.376627 env[1321]: time="2025-10-29T00:26:33.376492290Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42d7e107b7ce19f6afb38a4b509f40482d55a6ded375035abea209b4a05e9eb1\"" Oct 29 00:26:33.378361 env[1321]: time="2025-10-29T00:26:33.378158774Z" level=info msg="StartContainer for \"42d7e107b7ce19f6afb38a4b509f40482d55a6ded375035abea209b4a05e9eb1\"" Oct 29 00:26:33.429999 env[1321]: time="2025-10-29T00:26:33.429951943Z" level=info msg="StartContainer for \"42d7e107b7ce19f6afb38a4b509f40482d55a6ded375035abea209b4a05e9eb1\" returns successfully" Oct 29 00:26:33.463350 env[1321]: time="2025-10-29T00:26:33.463302105Z" level=info msg="shim disconnected" id=42d7e107b7ce19f6afb38a4b509f40482d55a6ded375035abea209b4a05e9eb1 Oct 29 00:26:33.463350 env[1321]: time="2025-10-29T00:26:33.463349025Z" level=warning msg="cleaning up after shim disconnected" id=42d7e107b7ce19f6afb38a4b509f40482d55a6ded375035abea209b4a05e9eb1 namespace=k8s.io Oct 29 00:26:33.463350 env[1321]: time="2025-10-29T00:26:33.463360465Z" level=info msg="cleaning up dead shim" Oct 29 00:26:33.470384 env[1321]: time="2025-10-29T00:26:33.470341323Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4019 
runtime=io.containerd.runc.v2\n" Oct 29 00:26:33.967016 kubelet[2077]: E1029 00:26:33.966853 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:33.969732 env[1321]: time="2025-10-29T00:26:33.969686320Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 29 00:26:33.983758 env[1321]: time="2025-10-29T00:26:33.983140593Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2\"" Oct 29 00:26:33.984190 env[1321]: time="2025-10-29T00:26:33.984154876Z" level=info msg="StartContainer for \"f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2\"" Oct 29 00:26:34.034884 env[1321]: time="2025-10-29T00:26:34.034827456Z" level=info msg="StartContainer for \"f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2\" returns successfully" Oct 29 00:26:34.058488 env[1321]: time="2025-10-29T00:26:34.058440125Z" level=info msg="shim disconnected" id=f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2 Oct 29 00:26:34.058488 env[1321]: time="2025-10-29T00:26:34.058487365Z" level=warning msg="cleaning up after shim disconnected" id=f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2 namespace=k8s.io Oct 29 00:26:34.058739 env[1321]: time="2025-10-29T00:26:34.058497445Z" level=info msg="cleaning up dead shim" Oct 29 00:26:34.064869 env[1321]: time="2025-10-29T00:26:34.064828264Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4081 runtime=io.containerd.runc.v2\n" Oct 29 00:26:34.521722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f44a1a7a10ac76d9db4376725a24c2796cce7a071f2c488c92699c38739e5ea2-rootfs.mount: Deactivated successfully. 
Oct 29 00:26:34.695325 kubelet[2077]: I1029 00:26:34.695118 2077 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d3c90d-818c-476f-a4b7-ebe7bb2d63db" path="/var/lib/kubelet/pods/d0d3c90d-818c-476f-a4b7-ebe7bb2d63db/volumes" Oct 29 00:26:34.971931 kubelet[2077]: E1029 00:26:34.971900 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:34.975016 env[1321]: time="2025-10-29T00:26:34.974972073Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 29 00:26:34.991092 env[1321]: time="2025-10-29T00:26:34.991046680Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f\"" Oct 29 00:26:34.991970 env[1321]: time="2025-10-29T00:26:34.991933443Z" level=info msg="StartContainer for \"0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f\"" Oct 29 00:26:35.045110 env[1321]: time="2025-10-29T00:26:35.044968736Z" level=info msg="StartContainer for \"0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f\" returns successfully" Oct 29 00:26:35.067071 env[1321]: time="2025-10-29T00:26:35.067026969Z" level=info msg="shim disconnected" id=0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f Oct 29 00:26:35.067071 env[1321]: time="2025-10-29T00:26:35.067073289Z" level=warning msg="cleaning up after shim disconnected" id=0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f namespace=k8s.io Oct 29 00:26:35.067374 env[1321]: time="2025-10-29T00:26:35.067082129Z" level=info msg="cleaning up dead shim" Oct 29 00:26:35.073720 env[1321]: time="2025-10-29T00:26:35.073677351Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4137 runtime=io.containerd.runc.v2\n" Oct 29 00:26:35.521761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0194294d520ad2cb85e7c47c2f51b3fa0228d156cb5b27f7062c2f4910c2883f-rootfs.mount: Deactivated successfully. 
Oct 29 00:26:35.758010 kubelet[2077]: E1029 00:26:35.757971 2077 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 29 00:26:35.976208 kubelet[2077]: E1029 00:26:35.974966 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:35.976938 env[1321]: time="2025-10-29T00:26:35.976872600Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 29 00:26:35.995650 env[1321]: time="2025-10-29T00:26:35.995590103Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe\"" Oct 29 00:26:35.999593 env[1321]: time="2025-10-29T00:26:35.998632633Z" level=info msg="StartContainer for \"61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe\"" Oct 29 00:26:36.051852 env[1321]: time="2025-10-29T00:26:36.051806431Z" level=info msg="StartContainer for \"61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe\" returns successfully" Oct 29 00:26:36.071209 env[1321]: time="2025-10-29T00:26:36.071073823Z" level=info msg="shim disconnected" id=61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe Oct 29 00:26:36.071209 env[1321]: time="2025-10-29T00:26:36.071211783Z" level=warning msg="cleaning up after shim disconnected" id=61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe namespace=k8s.io Oct 29 00:26:36.071442 env[1321]: time="2025-10-29T00:26:36.071226063Z" level=info msg="cleaning up dead shim" Oct 29 00:26:36.078580 env[1321]: time="2025-10-29T00:26:36.078541011Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:26:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Oct 29 00:26:36.521677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61fa1bcd79fde1fed84cf2260c314eadc8ccbc8c9b7c0c5f92e0f587a03dc4fe-rootfs.mount: Deactivated successfully. Oct 29 00:26:36.693708 kubelet[2077]: E1029 00:26:36.693650 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ntrmn" podUID="ccb1d83e-e00e-4b75-b9bf-9924fe74943a" Oct 29 00:26:36.978495 kubelet[2077]: E1029 00:26:36.978464 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:36.983315 env[1321]: time="2025-10-29T00:26:36.983268153Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 29 00:26:36.998366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910714244.mount: Deactivated successfully. 
Oct 29 00:26:37.001003 env[1321]: time="2025-10-29T00:26:37.000956739Z" level=info msg="CreateContainer within sandbox \"45879bb0c85c46a7ee91ea13afb0c98dc5ff29ab0661f599856c6680dbdb62e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ff2f3134e691b3255b74b22cf6a7f5217d6f4307742e59462d6662517d172fa7\"" Oct 29 00:26:37.024431 env[1321]: time="2025-10-29T00:26:37.014635036Z" level=info msg="StartContainer for \"ff2f3134e691b3255b74b22cf6a7f5217d6f4307742e59462d6662517d172fa7\"" Oct 29 00:26:37.086023 env[1321]: time="2025-10-29T00:26:37.085976170Z" level=info msg="StartContainer for \"ff2f3134e691b3255b74b22cf6a7f5217d6f4307742e59462d6662517d172fa7\" returns successfully" Oct 29 00:26:37.341955 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Oct 29 00:26:37.986655 kubelet[2077]: E1029 00:26:37.986603 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:38.004660 kubelet[2077]: I1029 00:26:38.004223 2077 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57dd9" podStartSLOduration=6.004206326 podStartE2EDuration="6.004206326s" podCreationTimestamp="2025-10-29 00:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:26:38.003914485 +0000 UTC m=+97.424186306" watchObservedRunningTime="2025-10-29 00:26:38.004206326 +0000 UTC m=+97.424478147" Oct 29 00:26:38.694415 kubelet[2077]: E1029 00:26:38.693443 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qpjl8" podUID="6a04779a-8f35-4638-a972-d1b5fa33b1e1" Oct 29 00:26:38.694415 kubelet[2077]: E1029 00:26:38.693551 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ntrmn" podUID="ccb1d83e-e00e-4b75-b9bf-9924fe74943a" Oct 29 00:26:39.305915 kubelet[2077]: E1029 00:26:39.305868 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:40.273712 systemd-networkd[1101]: lxc_health: Link UP Oct 29 00:26:40.282026 systemd-networkd[1101]: lxc_health: Gained carrier Oct 29 00:26:40.292422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Oct 29 00:26:40.694047 kubelet[2077]: E1029 00:26:40.692777 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qpjl8" podUID="6a04779a-8f35-4638-a972-d1b5fa33b1e1" Oct 29 00:26:40.694047 kubelet[2077]: E1029 00:26:40.693196 2077 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ntrmn" 
podUID="ccb1d83e-e00e-4b75-b9bf-9924fe74943a" Oct 29 00:26:41.307004 kubelet[2077]: E1029 00:26:41.306955 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:41.996151 kubelet[2077]: E1029 00:26:41.996093 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:42.198565 systemd-networkd[1101]: lxc_health: Gained IPv6LL Oct 29 00:26:42.698860 kubelet[2077]: E1029 00:26:42.693714 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:42.698860 kubelet[2077]: E1029 00:26:42.693803 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:42.997707 kubelet[2077]: E1029 00:26:42.997619 2077 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:26:44.079422 systemd[1]: run-containerd-runc-k8s.io-ff2f3134e691b3255b74b22cf6a7f5217d6f4307742e59462d6662517d172fa7-runc.BTddlT.mount: Deactivated successfully. Oct 29 00:26:46.230595 systemd[1]: run-containerd-runc-k8s.io-ff2f3134e691b3255b74b22cf6a7f5217d6f4307742e59462d6662517d172fa7-runc.uFCapd.mount: Deactivated successfully. Oct 29 00:26:46.296636 sshd[3902]: pam_unix(sshd:session): session closed for user core Oct 29 00:26:46.299983 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:57028.service: Deactivated successfully. Oct 29 00:26:46.300926 systemd-logind[1308]: Session 25 logged out. Waiting for processes to exit. Oct 29 00:26:46.300966 systemd[1]: session-25.scope: Deactivated successfully. Oct 29 00:26:46.302740 systemd-logind[1308]: Removed session 25.