Sep 9 00:46:11.679754 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 00:46:11.679773 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025 Sep 9 00:46:11.679781 kernel: efi: EFI v2.70 by EDK II Sep 9 00:46:11.679787 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Sep 9 00:46:11.679792 kernel: random: crng init done Sep 9 00:46:11.679798 kernel: ACPI: Early table checksum verification disabled Sep 9 00:46:11.679804 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Sep 9 00:46:11.679811 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:46:11.679817 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679822 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679828 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679833 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679838 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679844 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679852 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679858 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679864 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:46:11.679869 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 00:46:11.679875 kernel: NUMA: Failed to initialise from firmware Sep 9 00:46:11.679881 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:46:11.679887 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Sep 9 00:46:11.679902 kernel: Zone ranges: Sep 9 00:46:11.679909 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:46:11.679916 kernel: DMA32 empty Sep 9 00:46:11.679922 kernel: Normal empty Sep 9 00:46:11.679928 kernel: Movable zone start for each node Sep 9 00:46:11.679933 kernel: Early memory node ranges Sep 9 00:46:11.679939 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Sep 9 00:46:11.679945 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Sep 9 00:46:11.679951 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Sep 9 00:46:11.679956 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Sep 9 00:46:11.679962 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Sep 9 00:46:11.679968 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Sep 9 00:46:11.679973 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Sep 9 00:46:11.679979 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 00:46:11.679986 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 00:46:11.679992 kernel: psci: probing for conduit method from ACPI. Sep 9 00:46:11.679997 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 9 00:46:11.680003 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 00:46:11.680009 kernel: psci: Trusted OS migration not required Sep 9 00:46:11.680017 kernel: psci: SMC Calling Convention v1.1 Sep 9 00:46:11.680023 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 00:46:11.680031 kernel: ACPI: SRAT not present Sep 9 00:46:11.680037 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 9 00:46:11.680044 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 9 00:46:11.680050 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 00:46:11.680056 kernel: Detected PIPT I-cache on CPU0 Sep 9 00:46:11.680062 kernel: CPU features: detected: GIC system register CPU interface Sep 9 00:46:11.680068 kernel: CPU features: detected: Hardware dirty bit management Sep 9 00:46:11.680074 kernel: CPU features: detected: Spectre-v4 Sep 9 00:46:11.680080 kernel: CPU features: detected: Spectre-BHB Sep 9 00:46:11.680087 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 00:46:11.680094 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 00:46:11.680100 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 00:46:11.680106 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 00:46:11.680112 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 9 00:46:11.680118 kernel: Policy zone: DMA Sep 9 00:46:11.680125 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923 Sep 9 00:46:11.680132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:46:11.680138 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:46:11.680144 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:46:11.680150 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:46:11.680157 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Sep 9 00:46:11.680164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:46:11.680170 kernel: trace event string verifier disabled Sep 9 00:46:11.680176 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:46:11.680183 kernel: rcu: RCU event tracing is enabled. Sep 9 00:46:11.680189 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:46:11.680195 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:46:11.680201 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:46:11.680208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 00:46:11.680214 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:46:11.680220 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 00:46:11.680227 kernel: GICv3: 256 SPIs implemented Sep 9 00:46:11.680233 kernel: GICv3: 0 Extended SPIs implemented Sep 9 00:46:11.680239 kernel: GICv3: Distributor has no Range Selector support Sep 9 00:46:11.680245 kernel: Root IRQ handler: gic_handle_irq Sep 9 00:46:11.680251 kernel: GICv3: 16 PPIs implemented Sep 9 00:46:11.680257 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 00:46:11.680263 kernel: ACPI: SRAT not present Sep 9 00:46:11.680269 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 00:46:11.680275 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Sep 9 00:46:11.680282 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Sep 9 00:46:11.680288 kernel: GICv3: using LPI property table @0x00000000400d0000 Sep 9 00:46:11.680294 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Sep 9 00:46:11.680301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:46:11.680308 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 00:46:11.680314 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 00:46:11.680320 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 00:46:11.680326 kernel: arm-pv: using stolen time PV Sep 9 00:46:11.680333 kernel: Console: colour dummy device 80x25 Sep 9 00:46:11.680339 kernel: ACPI: Core revision 20210730 Sep 9 00:46:11.680346 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 9 00:46:11.680352 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:46:11.680359 kernel: LSM: Security Framework initializing Sep 9 00:46:11.680366 kernel: SELinux: Initializing. Sep 9 00:46:11.680373 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:46:11.680379 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:46:11.680385 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:46:11.680391 kernel: Platform MSI: ITS@0x8080000 domain created Sep 9 00:46:11.680398 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 9 00:46:11.680404 kernel: Remapping and enabling EFI services. Sep 9 00:46:11.680410 kernel: smp: Bringing up secondary CPUs ... 
Sep 9 00:46:11.680416 kernel: Detected PIPT I-cache on CPU1 Sep 9 00:46:11.680424 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 00:46:11.680431 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Sep 9 00:46:11.680437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:46:11.680443 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 00:46:11.680449 kernel: Detected PIPT I-cache on CPU2 Sep 9 00:46:11.680456 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 00:46:11.680462 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Sep 9 00:46:11.680507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:46:11.680513 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 00:46:11.680520 kernel: Detected PIPT I-cache on CPU3 Sep 9 00:46:11.680528 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 00:46:11.680534 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Sep 9 00:46:11.680541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 00:46:11.680547 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 00:46:11.680558 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:46:11.680566 kernel: SMP: Total of 4 processors activated. Sep 9 00:46:11.680573 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 00:46:11.680580 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 00:46:11.680587 kernel: CPU features: detected: Common not Private translations Sep 9 00:46:11.680593 kernel: CPU features: detected: CRC32 instructions Sep 9 00:46:11.680600 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 00:46:11.680607 kernel: CPU features: detected: LSE atomic instructions Sep 9 00:46:11.680615 kernel: CPU features: detected: Privileged Access Never Sep 9 00:46:11.680622 kernel: CPU features: detected: RAS Extension Support Sep 9 00:46:11.680628 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 00:46:11.680635 kernel: CPU: All CPU(s) started at EL1 Sep 9 00:46:11.680641 kernel: alternatives: patching kernel code Sep 9 00:46:11.680649 kernel: devtmpfs: initialized Sep 9 00:46:11.680656 kernel: KASLR enabled Sep 9 00:46:11.680663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:46:11.680669 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:46:11.680676 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:46:11.680683 kernel: SMBIOS 3.0.0 present. 
Sep 9 00:46:11.680689 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Sep 9 00:46:11.680696 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:46:11.680702 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 00:46:11.680710 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 00:46:11.680717 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 00:46:11.680724 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:46:11.680731 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Sep 9 00:46:11.680737 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:46:11.680744 kernel: cpuidle: using governor menu Sep 9 00:46:11.680750 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 9 00:46:11.680757 kernel: ASID allocator initialised with 32768 entries Sep 9 00:46:11.680764 kernel: ACPI: bus type PCI registered Sep 9 00:46:11.680771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:46:11.680778 kernel: Serial: AMBA PL011 UART driver Sep 9 00:46:11.680785 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:46:11.680791 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 00:46:11.680798 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:46:11.680805 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 00:46:11.680811 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:46:11.680818 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 00:46:11.680825 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:46:11.680832 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:46:11.680839 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:46:11.680846 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 9 00:46:11.680852 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 9 00:46:11.680859 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 9 00:46:11.680865 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:46:11.680872 kernel: ACPI: Interpreter enabled Sep 9 00:46:11.680879 kernel: ACPI: Using GIC for interrupt routing Sep 9 00:46:11.680885 kernel: ACPI: MCFG table detected, 1 entries Sep 9 00:46:11.680899 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 00:46:11.680906 kernel: printk: console [ttyAMA0] enabled Sep 9 00:46:11.680912 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:46:11.681065 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:46:11.681131 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 00:46:11.681191 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 00:46:11.681252 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 00:46:11.681315 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 00:46:11.681324 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 00:46:11.681331 kernel: PCI host bridge to bus 0000:00 Sep 9 00:46:11.681396 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 00:46:11.681452 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 00:46:11.681536 kernel: pci_bus 0000:00: root bus 
resource [mem 0x8000000000-0xffffffffff window] Sep 9 00:46:11.681593 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:46:11.681670 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 9 00:46:11.681742 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:46:11.681806 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 9 00:46:11.681867 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 9 00:46:11.681940 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:46:11.682008 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 00:46:11.682073 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 9 00:46:11.682144 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 9 00:46:11.682212 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 00:46:11.682272 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 00:46:11.682330 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 9 00:46:11.682339 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 00:46:11.682346 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 00:46:11.682352 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 00:46:11.682359 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 00:46:11.682368 kernel: iommu: Default domain type: Translated Sep 9 00:46:11.682374 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 00:46:11.682381 kernel: vgaarb: loaded Sep 9 00:46:11.682388 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 9 00:46:11.682395 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Sep 9 00:46:11.682402 kernel: PTP clock support registered Sep 9 00:46:11.682409 kernel: Registered efivars operations Sep 9 00:46:11.682415 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 00:46:11.682422 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:46:11.682430 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:46:11.682437 kernel: pnp: PnP ACPI init Sep 9 00:46:11.682537 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 00:46:11.682548 kernel: pnp: PnP ACPI: found 1 devices Sep 9 00:46:11.682555 kernel: NET: Registered PF_INET protocol family Sep 9 00:46:11.682562 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:46:11.682569 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:46:11.682575 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:46:11.682585 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:46:11.682592 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 9 00:46:11.682599 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:46:11.682613 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:46:11.682619 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:46:11.682626 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:46:11.682633 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:46:11.682640 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 9 00:46:11.682647 kernel: kvm [1]: HYP mode not available Sep
9 00:46:11.682655 kernel: Initialise system trusted keyrings Sep 9 00:46:11.682828 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:46:11.682838 kernel: Key type asymmetric registered Sep 9 00:46:11.682845 kernel: Asymmetric key parser 'x509' registered Sep 9 00:46:11.682851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 00:46:11.682858 kernel: io scheduler mq-deadline registered Sep 9 00:46:11.682865 kernel: io scheduler kyber registered Sep 9 00:46:11.682872 kernel: io scheduler bfq registered Sep 9 00:46:11.682879 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 00:46:11.682898 kernel: ACPI: button: Power Button [PWRB] Sep 9 00:46:11.682907 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 00:46:11.682990 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 00:46:11.683001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:46:11.683007 kernel: thunder_xcv, ver 1.0 Sep 9 00:46:11.683523 kernel: thunder_bgx, ver 1.0 Sep 9 00:46:11.683531 kernel: nicpf, ver 1.0 Sep 9 00:46:11.683538 kernel: nicvf, ver 1.0 Sep 9 00:46:11.683650 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 00:46:11.683718 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:46:11 UTC (1757378771) Sep 9 00:46:11.683727 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 00:46:11.683734 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:46:11.683740 kernel: Segment Routing with IPv6 Sep 9 00:46:11.683747 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:46:11.683754 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:46:11.683761 kernel: Key type dns_resolver registered Sep 9 00:46:11.683767 kernel: registered taskstats version 1 Sep 9 00:46:11.683776 kernel: Loading compiled-in X.509 certificates Sep 9 00:46:11.683783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252' Sep 9 00:46:11.683790 kernel: Key type .fscrypt registered Sep 9 00:46:11.683796 kernel: Key type fscrypt-provisioning registered Sep 9 00:46:11.683803 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:46:11.683810 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:46:11.683816 kernel: ima: No architecture policies found Sep 9 00:46:11.683823 kernel: clk: Disabling unused clocks Sep 9 00:46:11.683830 kernel: Freeing unused kernel memory: 36416K Sep 9 00:46:11.683838 kernel: Run /init as init process Sep 9 00:46:11.683844 kernel: with arguments: Sep 9 00:46:11.683851 kernel: /init Sep 9 00:46:11.683858 kernel: with environment: Sep 9 00:46:11.683864 kernel: HOME=/ Sep 9 00:46:11.683871 kernel: TERM=linux Sep 9 00:46:11.683877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:46:11.683886 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 9 00:46:11.683910 systemd[1]: Detected virtualization kvm. Sep 9 00:46:11.683918 systemd[1]: Detected architecture arm64. Sep 9 00:46:11.683925 systemd[1]: Running in initrd. Sep 9 00:46:11.683932 systemd[1]: No hostname configured, using default hostname. Sep 9 00:46:11.683939 systemd[1]: Hostname set to <localhost>.
Sep 9 00:46:11.683946 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:46:11.683953 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:46:11.683960 systemd[1]: Started systemd-ask-password-console.path. Sep 9 00:46:11.683969 systemd[1]: Reached target cryptsetup.target. Sep 9 00:46:11.683976 systemd[1]: Reached target paths.target. Sep 9 00:46:11.683983 systemd[1]: Reached target slices.target. Sep 9 00:46:11.683990 systemd[1]: Reached target swap.target. Sep 9 00:46:11.683997 systemd[1]: Reached target timers.target. Sep 9 00:46:11.684005 systemd[1]: Listening on iscsid.socket. Sep 9 00:46:11.684012 systemd[1]: Listening on iscsiuio.socket. Sep 9 00:46:11.684020 systemd[1]: Listening on systemd-journald-audit.socket. Sep 9 00:46:11.684028 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 9 00:46:11.684035 systemd[1]: Listening on systemd-journald.socket. Sep 9 00:46:11.684042 systemd[1]: Listening on systemd-networkd.socket. Sep 9 00:46:11.684049 systemd[1]: Listening on systemd-udevd-control.socket. Sep 9 00:46:11.684057 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 9 00:46:11.684064 systemd[1]: Reached target sockets.target. Sep 9 00:46:11.684071 systemd[1]: Starting kmod-static-nodes.service... Sep 9 00:46:11.684078 systemd[1]: Finished network-cleanup.service. Sep 9 00:46:11.684086 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:46:11.684093 systemd[1]: Starting systemd-journald.service... Sep 9 00:46:11.684101 systemd[1]: Starting systemd-modules-load.service... Sep 9 00:46:11.684108 systemd[1]: Starting systemd-resolved.service... Sep 9 00:46:11.684115 systemd[1]: Starting systemd-vconsole-setup.service... Sep 9 00:46:11.684122 systemd[1]: Finished kmod-static-nodes.service. Sep 9 00:46:11.684129 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:46:11.684137 kernel: audit: type=1130 audit(1757378771.678:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.684145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 9 00:46:11.684156 systemd-journald[290]: Journal started Sep 9 00:46:11.684199 systemd-journald[290]: Runtime Journal (/run/log/journal/b431df1f6312474d9f5280109f1ebd4e) is 6.0M, max 48.7M, 42.6M free. Sep 9 00:46:11.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.682588 systemd-modules-load[291]: Inserted module 'overlay' Sep 9 00:46:11.689537 systemd[1]: Started systemd-journald.service. Sep 9 00:46:11.689574 kernel: audit: type=1130 audit(1757378771.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.688883 systemd[1]: Finished systemd-vconsole-setup.service. Sep 9 00:46:11.693881 kernel: audit: type=1130 audit(1757378771.690:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:11.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.691286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 9 00:46:11.697526 kernel: audit: type=1130 audit(1757378771.694:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.695332 systemd[1]: Starting dracut-cmdline-ask.service... Sep 9 00:46:11.696388 systemd-resolved[292]: Positive Trust Anchors: Sep 9 00:46:11.696394 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:46:11.696421 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 9 00:46:11.711333 kernel: audit: type=1130 audit(1757378771.702:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.700585 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 9 00:46:11.701882 systemd[1]: Started systemd-resolved.service. Sep 9 00:46:11.702730 systemd[1]: Reached target nss-lookup.target. Sep 9 00:46:11.715052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:46:11.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.716442 systemd[1]: Finished dracut-cmdline-ask.service. Sep 9 00:46:11.721499 kernel: audit: type=1130 audit(1757378771.716:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.721518 kernel: Bridge firewalling registered Sep 9 00:46:11.718103 systemd[1]: Starting dracut-cmdline.service... 
Sep 9 00:46:11.719726 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 9 00:46:11.727170 dracut-cmdline[308]: dracut-dracut-053 Sep 9 00:46:11.729341 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923 Sep 9 00:46:11.734492 kernel: SCSI subsystem initialized Sep 9 00:46:11.741526 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:46:11.741564 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:46:11.742489 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 9 00:46:11.744601 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 9 00:46:11.745420 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:46:11.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.749570 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:46:11.750825 kernel: audit: type=1130 audit(1757378771.746:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.756665 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:46:11.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.760503 kernel: audit: type=1130 audit(1757378771.756:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.791497 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:46:11.804496 kernel: iscsi: registered transport (tcp) Sep 9 00:46:11.817544 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:46:11.817563 kernel: QLogic iSCSI HBA Driver Sep 9 00:46:11.852216 systemd[1]: Finished dracut-cmdline.service. Sep 9 00:46:11.853743 systemd[1]: Starting dracut-pre-udev.service... Sep 9 00:46:11.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:11.857495 kernel: audit: type=1130 audit(1757378771.852:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:11.896493 kernel: raid6: neonx8 gen() 13724 MB/s Sep 9 00:46:11.913481 kernel: raid6: neonx8 xor() 10820 MB/s Sep 9 00:46:11.930485 kernel: raid6: neonx4 gen() 13495 MB/s Sep 9 00:46:11.947489 kernel: raid6: neonx4 xor() 11050 MB/s Sep 9 00:46:11.964489 kernel: raid6: neonx2 gen() 12940 MB/s Sep 9 00:46:11.981493 kernel: raid6: neonx2 xor() 10238 MB/s Sep 9 00:46:11.998489 kernel: raid6: neonx1 gen() 10568 MB/s Sep 9 00:46:12.015491 kernel: raid6: neonx1 xor() 8781 MB/s Sep 9 00:46:12.032480 kernel: raid6: int64x8 gen() 6268 MB/s Sep 9 00:46:12.049492 kernel: raid6: int64x8 xor() 3544 MB/s Sep 9 00:46:12.066490 kernel: raid6: int64x4 gen() 7220 MB/s Sep 9 00:46:12.083490 kernel: raid6: int64x4 xor() 3848 MB/s Sep 9 00:46:12.100490 kernel: raid6: int64x2 gen() 6150 MB/s Sep 9 00:46:12.117490 kernel: raid6: int64x2 xor() 3321 MB/s Sep 9 00:46:12.134488 kernel: raid6: int64x1 gen() 5043 MB/s Sep 9 00:46:12.151751 kernel: raid6: int64x1 xor() 2646 MB/s Sep 9 00:46:12.151773 kernel: raid6: using algorithm neonx8 gen() 13724 MB/s Sep 9 00:46:12.151791 kernel: raid6: .... xor() 10820 MB/s, rmw enabled Sep 9 00:46:12.151813 kernel: raid6: using neon recovery algorithm Sep 9 00:46:12.162553 kernel: xor: measuring software checksum speed Sep 9 00:46:12.162574 kernel: 8regs : 17202 MB/sec Sep 9 00:46:12.163577 kernel: 32regs : 20691 MB/sec Sep 9 00:46:12.163588 kernel: arm64_neon : 27851 MB/sec Sep 9 00:46:12.163596 kernel: xor: using function: arm64_neon (27851 MB/sec) Sep 9 00:46:12.216491 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 9 00:46:12.226917 systemd[1]: Finished dracut-pre-udev.service. Sep 9 00:46:12.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:12.227000 audit: BPF prog-id=7 op=LOAD Sep 9 00:46:12.227000 audit: BPF prog-id=8 op=LOAD Sep 9 00:46:12.228676 systemd[1]: Starting systemd-udevd.service... Sep 9 00:46:12.240794 systemd-udevd[493]: Using default interface naming scheme 'v252'. Sep 9 00:46:12.244250 systemd[1]: Started systemd-udevd.service. Sep 9 00:46:12.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:12.246181 systemd[1]: Starting dracut-pre-trigger.service... Sep 9 00:46:12.257343 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Sep 9 00:46:12.285327 systemd[1]: Finished dracut-pre-trigger.service. Sep 9 00:46:12.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:12.287067 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:46:12.326527 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:46:12.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:12.357963 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:46:12.367590 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 9 00:46:12.367618 kernel: GPT:9289727 != 19775487 Sep 9 00:46:12.367629 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:46:12.367638 kernel: GPT:9289727 != 19775487 Sep 9 00:46:12.367646 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:46:12.367655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:46:12.391503 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (551) Sep 9 00:46:12.394135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:46:12.397618 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 9 00:46:12.402610 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 9 00:46:12.403614 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:46:12.407944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:46:12.409721 systemd[1]: Starting disk-uuid.service... Sep 9 00:46:12.421195 disk-uuid[563]: Primary Header is updated. Sep 9 00:46:12.421195 disk-uuid[563]: Secondary Entries is updated. Sep 9 00:46:12.421195 disk-uuid[563]: Secondary Header is updated. Sep 9 00:46:12.425488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:46:12.429497 kernel: GPT:disk_guids don't match. Sep 9 00:46:12.429520 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:46:12.429537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:46:12.433493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:46:13.432487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:46:13.432754 disk-uuid[564]: The operation has completed successfully. Sep 9 00:46:13.455340 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:46:13.455434 systemd[1]: Finished disk-uuid.service. Sep 9 00:46:13.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.459292 systemd[1]: Starting verity-setup.service... Sep 9 00:46:13.474497 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:46:13.503499 systemd[1]: Found device dev-mapper-usr.device. Sep 9 00:46:13.504867 systemd[1]: Mounting sysusr-usr.mount... Sep 9 00:46:13.506683 systemd[1]: Finished verity-setup.service. Sep 9 00:46:13.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.553490 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:46:13.553863 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:46:13.554534 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:46:13.555252 systemd[1]: Starting ignition-setup.service... Sep 9 00:46:13.560128 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 9 00:46:13.567996 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:46:13.568033 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:46:13.568043 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:46:13.580760 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:46:13.589610 systemd[1]: Finished ignition-setup.service. Sep 9 00:46:13.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.592931 systemd[1]: Starting ignition-fetch-offline.service... Sep 9 00:46:13.662876 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:46:13.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.663072 ignition[655]: Ignition 2.14.0 Sep 9 00:46:13.663000 audit: BPF prog-id=9 op=LOAD Sep 9 00:46:13.664852 systemd[1]: Starting systemd-networkd.service... Sep 9 00:46:13.663080 ignition[655]: Stage: fetch-offline Sep 9 00:46:13.663118 ignition[655]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:13.663127 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:13.663253 ignition[655]: parsed url from cmdline: "" Sep 9 00:46:13.663256 ignition[655]: no config URL provided Sep 9 00:46:13.663261 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:46:13.663269 ignition[655]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:46:13.663287 ignition[655]: op(1): [started] loading QEMU firmware config module Sep 9 00:46:13.663292 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:46:13.676923 ignition[655]: op(1): [finished] loading QEMU firmware config module Sep 9 00:46:13.676944 ignition[655]: QEMU firmware config was not found. Ignoring... Sep 9 00:46:13.692941 systemd-networkd[740]: lo: Link UP Sep 9 00:46:13.692952 systemd-networkd[740]: lo: Gained carrier Sep 9 00:46:13.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.693346 systemd-networkd[740]: Enumeration completed Sep 9 00:46:13.693439 systemd[1]: Started systemd-networkd.service. Sep 9 00:46:13.694699 systemd[1]: Reached target network.target. Sep 9 00:46:13.696743 systemd[1]: Starting iscsiuio.service... Sep 9 00:46:13.698183 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:46:13.700837 systemd-networkd[740]: eth0: Link UP Sep 9 00:46:13.700840 systemd-networkd[740]: eth0: Gained carrier Sep 9 00:46:13.705040 systemd[1]: Started iscsiuio.service. Sep 9 00:46:13.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.706793 systemd[1]: Starting iscsid.service... Sep 9 00:46:13.710893 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:46:13.710893 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 9 00:46:13.710893 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:46:13.710893 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:46:13.710893 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:46:13.710893 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:46:13.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.713788 systemd[1]: Started iscsid.service. Sep 9 00:46:13.718406 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:46:13.720248 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:46:13.729087 systemd[1]: Finished dracut-initqueue.service. Sep 9 00:46:13.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.730159 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:46:13.731506 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:46:13.733206 systemd[1]: Reached target remote-fs.target. Sep 9 00:46:13.735518 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:46:13.743425 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:46:13.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.743972 ignition[655]: parsing config with SHA512: 078e59266a8a59078b33aefacd4d57566a7ab778201f18b297e00a82974d003b180d1e0c5ba13381524e5b4541b9f43be9383521ec8a51146b4d6022a55b4edc Sep 9 00:46:13.753135 unknown[655]: fetched base config from "system" Sep 9 00:46:13.753150 unknown[655]: fetched user config from "qemu" Sep 9 00:46:13.754095 ignition[655]: fetch-offline: fetch-offline passed Sep 9 00:46:13.755660 systemd[1]: Finished ignition-fetch-offline.service. Sep 9 00:46:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.754158 ignition[655]: Ignition finished successfully Sep 9 00:46:13.757052 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:46:13.757809 systemd[1]: Starting ignition-kargs.service... Sep 9 00:46:13.767191 ignition[761]: Ignition 2.14.0 Sep 9 00:46:13.767208 ignition[761]: Stage: kargs Sep 9 00:46:13.767297 ignition[761]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:13.767307 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:13.768461 ignition[761]: kargs: kargs passed Sep 9 00:46:13.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 9 00:46:13.769862 systemd[1]: Finished ignition-kargs.service. Sep 9 00:46:13.768523 ignition[761]: Ignition finished successfully Sep 9 00:46:13.771979 systemd[1]: Starting ignition-disks.service... Sep 9 00:46:13.778338 ignition[767]: Ignition 2.14.0 Sep 9 00:46:13.778349 ignition[767]: Stage: disks Sep 9 00:46:13.778434 ignition[767]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:13.780681 systemd[1]: Finished ignition-disks.service. Sep 9 00:46:13.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.778445 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:13.781568 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:46:13.779267 ignition[767]: disks: disks passed Sep 9 00:46:13.782857 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:46:13.779309 ignition[767]: Ignition finished successfully Sep 9 00:46:13.784276 systemd[1]: Reached target local-fs.target. Sep 9 00:46:13.785607 systemd[1]: Reached target sysinit.target. Sep 9 00:46:13.786683 systemd[1]: Reached target basic.target. Sep 9 00:46:13.788691 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:46:13.800297 systemd-fsck[776]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:46:13.804041 systemd[1]: Finished systemd-fsck-root.service. Sep 9 00:46:13.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.806087 systemd[1]: Mounting sysroot.mount... Sep 9 00:46:13.811360 systemd[1]: Mounted sysroot.mount. Sep 9 00:46:13.812506 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 9 00:46:13.812162 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:46:13.814278 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:46:13.815183 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:46:13.815217 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:46:13.815241 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:46:13.817008 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:46:13.818723 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:46:13.823007 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:46:13.827672 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:46:13.832502 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:46:13.836419 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:46:13.864551 systemd[1]: Finished initrd-setup-root.service. Sep 9 00:46:13.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.866177 systemd[1]: Starting ignition-mount.service... Sep 9 00:46:13.867562 systemd[1]: Starting sysroot-boot.service... Sep 9 00:46:13.872178 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 9 00:46:13.881913 ignition[828]: INFO : Ignition 2.14.0 Sep 9 00:46:13.881913 ignition[828]: INFO : Stage: mount Sep 9 00:46:13.883723 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:13.883723 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:13.883723 ignition[828]: INFO : mount: mount passed Sep 9 00:46:13.883723 ignition[828]: INFO : Ignition finished successfully Sep 9 00:46:13.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.885762 systemd[1]: Finished ignition-mount.service. Sep 9 00:46:13.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:13.887429 systemd[1]: Finished sysroot-boot.service. Sep 9 00:46:14.513820 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:46:14.520945 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (837) Sep 9 00:46:14.520989 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:46:14.520999 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:46:14.521909 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:46:14.525428 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 9 00:46:14.526989 systemd[1]: Starting ignition-files.service... Sep 9 00:46:14.540678 ignition[857]: INFO : Ignition 2.14.0 Sep 9 00:46:14.540678 ignition[857]: INFO : Stage: files Sep 9 00:46:14.541974 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:14.541974 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:14.541974 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:46:14.545931 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:46:14.545931 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:46:14.550719 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:46:14.550719 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:46:14.550719 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:46:14.550719 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 9 00:46:14.550719 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 9 00:46:14.548513 unknown[857]: wrote ssh authorized keys file for user: core Sep 9 00:46:14.608844 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:46:14.908253 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 9 00:46:14.909964 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:46:14.911281 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 00:46:15.102242 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:46:15.239063 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:46:15.239063 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:46:15.241891 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 9 00:46:15.530994 systemd-networkd[740]: eth0: Gained IPv6LL Sep 9 00:46:15.599829 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:46:16.457583 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:46:16.457583 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 00:46:16.460247 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:46:16.461907 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:46:16.461907 ignition[857]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" Sep 9 00:46:16.461907 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 00:46:16.465481 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:46:16.465481 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:46:16.465481 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 00:46:16.465481 ignition[857]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:46:16.465481 ignition[857]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:46:16.498485 ignition[857]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:46:16.498485 ignition[857]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:46:16.498485 ignition[857]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:46:16.498485 ignition[857]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:46:16.498485 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:46:16.510609 kernel: kauditd_printk_skb: 24 callbacks suppressed Sep 9 00:46:16.510631 kernel: audit: type=1130 audit(1757378776.501:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.510699 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:46:16.510699 ignition[857]: INFO : files: files passed Sep 9 00:46:16.510699 ignition[857]: INFO : Ignition finished successfully Sep 9 00:46:16.516492 kernel: audit: type=1130 audit(1757378776.510:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.500794 systemd[1]: Finished ignition-files.service. Sep 9 00:46:16.503704 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:46:16.522557 kernel: audit: type=1130 audit(1757378776.517:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.522576 kernel: audit: type=1131 audit(1757378776.517:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:16.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.522712 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:46:16.507344 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:46:16.525535 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:46:16.508045 systemd[1]: Starting ignition-quench.service... Sep 9 00:46:16.510145 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:46:16.511425 systemd[1]: Reached target ignition-complete.target. Sep 9 00:46:16.515928 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:46:16.517205 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:46:16.517286 systemd[1]: Finished ignition-quench.service. Sep 9 00:46:16.529735 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:46:16.529832 systemd[1]: Finished initrd-parse-etc.service. Sep 9 00:46:16.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.534584 systemd[1]: Reached target initrd-fs.target. Sep 9 00:46:16.538841 kernel: audit: type=1130 audit(1757378776.533:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.538860 kernel: audit: type=1131 audit(1757378776.533:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.539485 systemd[1]: Reached target initrd.target. Sep 9 00:46:16.540071 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:46:16.540811 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:46:16.551125 systemd[1]: Finished dracut-pre-pivot.service. Sep 9 00:46:16.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.552548 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:46:16.555752 kernel: audit: type=1130 audit(1757378776.551:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.561018 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:46:16.561998 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:46:16.563113 systemd[1]: Stopped target timers.target. 
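
The preset steps logged above (op(10) disabling coreos-metadata.service, op(12) enabling prepare-helm.service) follow systemd's preset mechanism: *.preset files are scanned top to bottom and the first matching rule wins. A minimal sketch of that first-match-wins logic, with invented rules standing in for the image's real preset files:

```python
# First-match-wins preset evaluation, as systemd.preset describes it.
# PRESET_RULES is an invented stand-in for the parsed *.preset files.
import fnmatch

PRESET_RULES = [
    ("disable", "coreos-metadata.service"),
    ("enable", "prepare-helm.service"),
    ("enable", "*"),                      # typical vendor catch-all
]

def preset_action(unit: str) -> str:
    """Return the action of the first rule whose pattern matches."""
    for action, pattern in PRESET_RULES:
        if fnmatch.fnmatch(unit, pattern):
            return action
    return "enable"                       # systemd's default when nothing matches

for unit in ("coreos-metadata.service", "prepare-helm.service", "sshd.service"):
    print(unit, "->", preset_action(unit))
```
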
Sep 9 00:46:16.564208 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:46:16.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.564309 systemd[1]: Stopped dracut-pre-pivot.service. Sep 9 00:46:16.568726 kernel: audit: type=1131 audit(1757378776.564:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.565367 systemd[1]: Stopped target initrd.target. Sep 9 00:46:16.568326 systemd[1]: Stopped target basic.target. Sep 9 00:46:16.569278 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:46:16.570324 systemd[1]: Stopped target ignition-diskful.target. Sep 9 00:46:16.572146 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:46:16.573480 systemd[1]: Stopped target remote-fs.target. Sep 9 00:46:16.574661 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:46:16.576477 systemd[1]: Stopped target sysinit.target. Sep 9 00:46:16.577746 systemd[1]: Stopped target local-fs.target. Sep 9 00:46:16.579457 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:46:16.580628 systemd[1]: Stopped target swap.target. Sep 9 00:46:16.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.581630 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:46:16.586758 kernel: audit: type=1131 audit(1757378776.582:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.581747 systemd[1]: Stopped dracut-pre-mount.service. Sep 9 00:46:16.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.583223 systemd[1]: Stopped target cryptsetup.target. Sep 9 00:46:16.590998 kernel: audit: type=1131 audit(1757378776.586:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.586158 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:46:16.586265 systemd[1]: Stopped dracut-initqueue.service. Sep 9 00:46:16.587418 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:46:16.587526 systemd[1]: Stopped ignition-fetch-offline.service. Sep 9 00:46:16.590584 systemd[1]: Stopped target paths.target. Sep 9 00:46:16.591544 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:46:16.597542 systemd[1]: Stopped systemd-ask-password-console.path. Sep 9 00:46:16.598378 systemd[1]: Stopped target slices.target. Sep 9 00:46:16.599446 systemd[1]: Stopped target sockets.target. Sep 9 00:46:16.600478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
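
The teardown above is a long run of "Stopped target ..." entries in dependency order. To pull that sequence out of a captured console log, a small regex pass is enough; the sample below abbreviates three of the entries above:

```python
# Extract "Stopped target <unit>" entries from a console capture.
# The sample reproduces three entries from the teardown above.
import re

sample = """\
Sep 9 00:46:16.561018 systemd[1]: Stopped target nss-lookup.target.
Sep 9 00:46:16.561998 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 9 00:46:16.563113 systemd[1]: Stopped target timers.target.
"""

pattern = re.compile(r"^(\S+ +\d+ [\d:.]+) systemd\[1\]: Stopped target (\S+)\.$",
                     re.MULTILINE)
for timestamp, target in pattern.findall(sample):
    print(timestamp, target)
```
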
Sep 9 00:46:16.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.600591 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 9 00:46:16.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.602057 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:46:16.602146 systemd[1]: Stopped ignition-files.service. Sep 9 00:46:16.604309 systemd[1]: Stopping ignition-mount.service... Sep 9 00:46:16.605312 systemd[1]: Stopping iscsid.service... Sep 9 00:46:16.606019 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:46:16.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.608351 iscsid[746]: iscsid shutting down. Sep 9 00:46:16.606118 systemd[1]: Stopped kmod-static-nodes.service. Sep 9 00:46:16.612302 ignition[897]: INFO : Ignition 2.14.0 Sep 9 00:46:16.612302 ignition[897]: INFO : Stage: umount Sep 9 00:46:16.613494 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:46:16.613494 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:46:16.615052 ignition[897]: INFO : umount: umount passed Sep 9 00:46:16.615052 ignition[897]: INFO : Ignition finished successfully Sep 9 00:46:16.615411 systemd[1]: Stopping sysroot-boot.service... Sep 9 00:46:16.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.616078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:46:16.616215 systemd[1]: Stopped systemd-udev-trigger.service. Sep 9 00:46:16.617307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:46:16.617396 systemd[1]: Stopped dracut-pre-trigger.service. Sep 9 00:46:16.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.620726 systemd[1]: iscsid.service: Deactivated successfully. Sep 9 00:46:16.620819 systemd[1]: Stopped iscsid.service. Sep 9 00:46:16.621725 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:46:16.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.621803 systemd[1]: Stopped ignition-mount.service. Sep 9 00:46:16.624428 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:46:16.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.624514 systemd[1]: Closed iscsid.socket. 
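
During the umount stage Ignition reports "no configs at /usr/lib/ignition/base.d" and no platform config dir for qemu. Those are ordinary directory probes; a quick sketch that checks the same two paths Ignition names:

```python
# Probe the two Ignition base-config locations named in the log.
from pathlib import Path

for p in ("/usr/lib/ignition/base.d",
          "/usr/lib/ignition/base.platform.d/qemu"):
    print(p, "->", "present" if Path(p).is_dir() else "missing")
```
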
Sep 9 00:46:16.626090 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:46:16.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.626133 systemd[1]: Stopped ignition-disks.service. Sep 9 00:46:16.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.627223 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:46:16.627261 systemd[1]: Stopped ignition-kargs.service. Sep 9 00:46:16.630370 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:46:16.630412 systemd[1]: Stopped ignition-setup.service. Sep 9 00:46:16.631325 systemd[1]: Stopping iscsiuio.service... Sep 9 00:46:16.634945 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:46:16.635385 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:46:16.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.635483 systemd[1]: Finished initrd-cleanup.service. Sep 9 00:46:16.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.636789 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 9 00:46:16.636882 systemd[1]: Stopped iscsiuio.service. Sep 9 00:46:16.639369 systemd[1]: Stopped target network.target. Sep 9 00:46:16.640185 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:46:16.640218 systemd[1]: Closed iscsiuio.socket. Sep 9 00:46:16.641167 systemd[1]: Stopping systemd-networkd.service... Sep 9 00:46:16.642305 systemd[1]: Stopping systemd-resolved.service... Sep 9 00:46:16.650834 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:46:16.650960 systemd[1]: Stopped systemd-resolved.service. Sep 9 00:46:16.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.653544 systemd-networkd[740]: eth0: DHCPv6 lease lost Sep 9 00:46:16.654589 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:46:16.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.654689 systemd[1]: Stopped systemd-networkd.service. Sep 9 00:46:16.657000 audit: BPF prog-id=6 op=UNLOAD Sep 9 00:46:16.655723 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:46:16.655753 systemd[1]: Closed systemd-networkd.socket. Sep 9 00:46:16.658923 systemd[1]: Stopping network-cleanup.service... Sep 9 00:46:16.659515 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:46:16.659573 systemd[1]: Stopped parse-ip-for-networkd.service. 
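
The DHCPv6 lease loss and networkd shutdown above track link state that the kernel exposes through sysfs. A best-effort sketch (the interface name eth0 is taken from the log; reading carrier on a downed link raises EINVAL, folded into None here):

```python
# Read link carrier state via sysfs, the same signal networkd watches.
from pathlib import Path
from typing import Optional

def has_carrier(ifname: str) -> Optional[bool]:
    node = Path(f"/sys/class/net/{ifname}/carrier")
    try:
        return node.read_text().strip() == "1"
    except OSError:   # interface missing, or EINVAL while the link is down
        return None

print(has_carrier("eth0"))
```
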
Sep 9 00:46:16.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.661833 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:46:16.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.661888 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:46:16.663641 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:46:16.666000 audit: BPF prog-id=9 op=UNLOAD Sep 9 00:46:16.663680 systemd[1]: Stopped systemd-modules-load.service. Sep 9 00:46:16.664407 systemd[1]: Stopping systemd-udevd.service... Sep 9 00:46:16.668655 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:46:16.672395 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:46:16.673335 systemd[1]: Stopped network-cleanup.service. Sep 9 00:46:16.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.675558 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:46:16.675686 systemd[1]: Stopped systemd-udevd.service. Sep 9 00:46:16.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.677962 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:46:16.678005 systemd[1]: Closed systemd-udevd-control.socket. Sep 9 00:46:16.678690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:46:16.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.678718 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 9 00:46:16.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.680042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:46:16.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.680086 systemd[1]: Stopped dracut-pre-udev.service. Sep 9 00:46:16.681169 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:46:16.681208 systemd[1]: Stopped dracut-cmdline.service. Sep 9 00:46:16.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.682416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 9 00:46:16.682450 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 9 00:46:16.684358 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 9 00:46:16.685490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:46:16.685544 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 9 00:46:16.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.690666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:46:16.690756 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 9 00:46:16.696710 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:46:16.696810 systemd[1]: Stopped sysroot-boot.service. Sep 9 00:46:16.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.698163 systemd[1]: Reached target initrd-switch-root.target. Sep 9 00:46:16.699259 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:46:16.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:16.699322 systemd[1]: Stopped initrd-setup-root.service. Sep 9 00:46:16.700924 systemd[1]: Starting initrd-switch-root.service... Sep 9 00:46:16.707727 systemd[1]: Switching root. Sep 9 00:46:16.725010 systemd-journald[290]: Journal stopped Sep 9 00:46:18.758008 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Sep 9 00:46:18.758068 kernel: SELinux: Class mctp_socket not defined in policy. Sep 9 00:46:18.758080 kernel: SELinux: Class anon_inode not defined in policy. Sep 9 00:46:18.758093 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 9 00:46:18.758102 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:46:18.758112 kernel: SELinux: policy capability open_perms=1 Sep 9 00:46:18.758121 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:46:18.758132 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:46:18.758142 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:46:18.758152 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:46:18.758164 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:46:18.758175 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:46:18.758187 systemd[1]: Successfully loaded SELinux policy in 34.288ms. Sep 9 00:46:18.758206 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.906ms. Sep 9 00:46:18.758218 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 9 00:46:18.758229 systemd[1]: Detected virtualization kvm. 
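
The kernel lines above show the SELinux policy loading with unknown classes permitted, which corresponds to deny_unknown=0 in selinuxfs. A sketch that reads the relevant selinuxfs nodes back, reporting rather than raising when SELinux is unavailable:

```python
# Read SELinux state back from selinuxfs; deny_unknown=0 matches the
# "unknown classes and permissions will be allowed" message above.
from pathlib import Path

sefs = Path("/sys/fs/selinux")
for node in ("enforce", "policyvers", "deny_unknown"):
    try:
        print(node, "=", (sefs / node).read_text().strip())
    except OSError:
        print(node, "= <selinuxfs not available here>")
```
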
Sep 9 00:46:18.758239 systemd[1]: Detected architecture arm64. Sep 9 00:46:18.758249 systemd[1]: Detected first boot. Sep 9 00:46:18.758260 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:46:18.758273 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 9 00:46:18.758283 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:46:18.758294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:46:18.758306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:46:18.758317 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:46:18.758329 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:46:18.758344 systemd[1]: Stopped initrd-switch-root.service. Sep 9 00:46:18.758355 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:46:18.758365 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 9 00:46:18.758376 systemd[1]: Created slice system-addon\x2drun.slice. Sep 9 00:46:18.758386 systemd[1]: Created slice system-getty.slice. Sep 9 00:46:18.758396 systemd[1]: Created slice system-modprobe.slice. Sep 9 00:46:18.758411 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 9 00:46:18.758422 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 9 00:46:18.758432 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 9 00:46:18.758442 systemd[1]: Created slice user.slice. Sep 9 00:46:18.758454 systemd[1]: Started systemd-ask-password-console.path. Sep 9 00:46:18.758476 systemd[1]: Started systemd-ask-password-wall.path. Sep 9 00:46:18.758487 systemd[1]: Set up automount boot.automount. Sep 9 00:46:18.758497 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 9 00:46:18.758507 systemd[1]: Stopped target initrd-switch-root.target. Sep 9 00:46:18.758519 systemd[1]: Stopped target initrd-fs.target. Sep 9 00:46:18.758530 systemd[1]: Stopped target initrd-root-fs.target. Sep 9 00:46:18.758540 systemd[1]: Reached target integritysetup.target. Sep 9 00:46:18.758551 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:46:18.758562 systemd[1]: Reached target remote-fs.target. Sep 9 00:46:18.758573 systemd[1]: Reached target slices.target. Sep 9 00:46:18.758583 systemd[1]: Reached target swap.target. Sep 9 00:46:18.758594 systemd[1]: Reached target torcx.target. Sep 9 00:46:18.758605 systemd[1]: Reached target veritysetup.target. Sep 9 00:46:18.758615 systemd[1]: Listening on systemd-coredump.socket. Sep 9 00:46:18.758625 systemd[1]: Listening on systemd-initctl.socket. Sep 9 00:46:18.758635 systemd[1]: Listening on systemd-networkd.socket. Sep 9 00:46:18.758650 systemd[1]: Listening on systemd-udevd-control.socket. Sep 9 00:46:18.758661 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 9 00:46:18.758800 systemd[1]: Listening on systemd-userdbd.socket. Sep 9 00:46:18.758845 systemd[1]: Mounting dev-hugepages.mount... Sep 9 00:46:18.758894 systemd[1]: Mounting dev-mqueue.mount... Sep 9 00:46:18.758939 systemd[1]: Mounting media.mount... Sep 9 00:46:18.758955 systemd[1]: Mounting sys-kernel-debug.mount... 
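
systemd's banner earlier in this stretch lists its compile-time features as a +/- string. Splitting it is one comprehension per polarity; the string below is a shortened stand-in for the full one in the log:

```python
# Split a systemd feature string into enabled/disabled build options.
features = "+PAM +AUDIT +SELINUX -APPARMOR +SECCOMP -TPM2 +ZSTD -BPF_FRAMEWORK"

enabled  = [f[1:] for f in features.split() if f.startswith("+")]
disabled = [f[1:] for f in features.split() if f.startswith("-")]
print("enabled :", ", ".join(enabled))
print("disabled:", ", ".join(disabled))
```
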
Sep 9 00:46:18.758967 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 9 00:46:18.758977 systemd[1]: Mounting tmp.mount... Sep 9 00:46:18.758988 systemd[1]: Starting flatcar-tmpfiles.service... Sep 9 00:46:18.758998 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:46:18.759008 systemd[1]: Starting kmod-static-nodes.service... Sep 9 00:46:18.759019 systemd[1]: Starting modprobe@configfs.service... Sep 9 00:46:18.759029 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:46:18.759039 systemd[1]: Starting modprobe@drm.service... Sep 9 00:46:18.759052 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:46:18.759063 systemd[1]: Starting modprobe@fuse.service... Sep 9 00:46:18.759073 systemd[1]: Starting modprobe@loop.service... Sep 9 00:46:18.759085 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:46:18.759096 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:46:18.759109 systemd[1]: Stopped systemd-fsck-root.service. Sep 9 00:46:18.759120 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:46:18.759130 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:46:18.759142 systemd[1]: Stopped systemd-journald.service. Sep 9 00:46:18.759152 kernel: fuse: init (API version 7.34) Sep 9 00:46:18.759162 systemd[1]: Starting systemd-journald.service... Sep 9 00:46:18.759172 kernel: loop: module loaded Sep 9 00:46:18.759182 systemd[1]: Starting systemd-modules-load.service... Sep 9 00:46:18.759193 systemd[1]: Starting systemd-network-generator.service... Sep 9 00:46:18.759203 systemd[1]: Starting systemd-remount-fs.service... Sep 9 00:46:18.759213 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:46:18.759224 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:46:18.759234 systemd[1]: Stopped verity-setup.service. Sep 9 00:46:18.759246 systemd[1]: Mounted dev-hugepages.mount. Sep 9 00:46:18.759256 systemd[1]: Mounted dev-mqueue.mount. Sep 9 00:46:18.759266 systemd[1]: Mounted media.mount. Sep 9 00:46:18.759276 systemd[1]: Mounted sys-kernel-debug.mount. Sep 9 00:46:18.759288 systemd-journald[998]: Journal started Sep 9 00:46:18.759334 systemd-journald[998]: Runtime Journal (/run/log/journal/b431df1f6312474d9f5280109f1ebd4e) is 6.0M, max 48.7M, 42.6M free. 
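
journald's startup line reports the runtime journal as "is 6.0M, max 48.7M, 42.6M free". A sketch that turns that human-readable report into byte counts, assuming the K/M/G suffixes journald prints are powers of 1024:

```python
# Turn journald's size report into byte counts.
import re

line = "Runtime Journal is 6.0M, max 48.7M, 42.6M free."
mult = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

sizes = {m.group(0): int(float(m.group(1)) * mult[m.group(2)])
         for m in re.finditer(r"([\d.]+)([KMG])\b", line)}
print(sizes)    # {'6.0M': 6291456, '48.7M': 51065651, '42.6M': 44668108}
```
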
Sep 9 00:46:16.784000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:46:16.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:46:16.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:46:16.895000 audit: BPF prog-id=10 op=LOAD Sep 9 00:46:16.895000 audit: BPF prog-id=10 op=UNLOAD Sep 9 00:46:16.895000 audit: BPF prog-id=11 op=LOAD Sep 9 00:46:16.895000 audit: BPF prog-id=11 op=UNLOAD Sep 9 00:46:16.941000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 9 00:46:16.941000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:46:16.941000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:46:16.943000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 9 00:46:16.943000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:46:16.943000 audit: CWD cwd="/" Sep 9 00:46:16.943000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:46:16.943000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:46:16.943000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:46:18.652000 audit: BPF prog-id=12 op=LOAD Sep 9 00:46:18.652000 audit: BPF prog-id=3 op=UNLOAD Sep 9 00:46:18.652000 audit: BPF prog-id=13 op=LOAD Sep 9 00:46:18.652000 audit: BPF prog-id=14 op=LOAD Sep 9 00:46:18.652000 audit: BPF prog-id=4 op=UNLOAD Sep 9 00:46:18.652000 audit: BPF prog-id=5 op=UNLOAD Sep 9 00:46:18.653000 audit: BPF prog-id=15 op=LOAD Sep 9 00:46:18.653000 audit: BPF prog-id=12 op=UNLOAD Sep 9 00:46:18.653000 audit: BPF prog-id=16 
op=LOAD Sep 9 00:46:18.653000 audit: BPF prog-id=17 op=LOAD Sep 9 00:46:18.653000 audit: BPF prog-id=13 op=UNLOAD Sep 9 00:46:18.653000 audit: BPF prog-id=14 op=UNLOAD Sep 9 00:46:18.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.667000 audit: BPF prog-id=15 op=UNLOAD Sep 9 00:46:18.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.737000 audit: BPF prog-id=18 op=LOAD Sep 9 00:46:18.737000 audit: BPF prog-id=19 op=LOAD Sep 9 00:46:18.737000 audit: BPF prog-id=20 op=LOAD Sep 9 00:46:18.737000 audit: BPF prog-id=16 op=UNLOAD Sep 9 00:46:18.737000 audit: BPF prog-id=17 op=UNLOAD Sep 9 00:46:18.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.756000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 9 00:46:18.756000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffae15f50 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:46:18.756000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 9 00:46:18.651230 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:46:18.760623 systemd[1]: Started systemd-journald.service. 
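
The audit records interleaved above (SERVICE_START/SERVICE_STOP, BPF prog loads, AVC decisions) are key=value streams. A simplified tokenizer using shlex for the quoted msg='...' payload; real auditd records have more quoting corner cases than this handles:

```python
# Tokenize one audit record into a dict; shlex copes with the quoted
# msg='...' payload. Simplified relative to full auditd quoting rules.
import shlex

rec = ("audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
       "subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald "
       "comm=\"systemd\" res=success'")

body = rec.split(": ", 1)[1]
fields = dict(tok.split("=", 1) for tok in shlex.split(body) if "=" in tok)
print(fields["pid"], fields["subj"])
print(fields["msg"])
```
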
Sep 9 00:46:16.940205 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:46:18.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.651241 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 9 00:46:16.940522 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:46:18.654204 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:46:16.940540 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:46:18.761242 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 9 00:46:16.940570 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 9 00:46:16.940579 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 9 00:46:16.940608 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 9 00:46:16.940620 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 9 00:46:16.940809 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 9 00:46:16.940841 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:46:16.940853 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:46:18.762132 systemd[1]: Mounted tmp.mount. 
Sep 9 00:46:16.941906 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 9 00:46:16.941943 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 9 00:46:16.941964 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 9 00:46:16.941978 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 9 00:46:16.941997 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 9 00:46:16.942010 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 9 00:46:18.393977 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:46:18.394241 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:46:18.394338 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:46:18.763038 systemd[1]: Finished kmod-static-nodes.service. 
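
The torcx-generator lines are logfmt: time="..." level=... msg="..." plus free-form keys. The same shlex trick parses them; the line below is one of the "profile found" entries condensed:

```python
# Parse one logfmt line from torcx-generator; shlex strips the quotes.
import shlex

line = ('time="2025-09-09T00:46:16Z" level=debug msg="profile found" '
        'name=vendor path=/usr/share/torcx/profiles/vendor.json')

fields = dict(tok.split("=", 1) for tok in shlex.split(line))
print(fields["level"], "|", fields["msg"], "|", fields["path"])
```
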
Sep 9 00:46:18.394521 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:46:18.394570 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 9 00:46:18.394629 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:46:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 9 00:46:18.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.764030 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:46:18.764204 systemd[1]: Finished modprobe@configfs.service. Sep 9 00:46:18.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.765159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:46:18.765631 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:46:18.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.766510 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:46:18.766694 systemd[1]: Finished modprobe@drm.service. Sep 9 00:46:18.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.767645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:46:18.768688 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:46:18.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:18.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.770822 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:46:18.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.770982 systemd[1]: Finished modprobe@fuse.service. Sep 9 00:46:18.771822 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:46:18.771970 systemd[1]: Finished modprobe@loop.service. Sep 9 00:46:18.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.772821 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:46:18.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.773725 systemd[1]: Finished systemd-network-generator.service. Sep 9 00:46:18.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.775326 systemd[1]: Finished systemd-remount-fs.service. Sep 9 00:46:18.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.776545 systemd[1]: Finished flatcar-tmpfiles.service. Sep 9 00:46:18.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.778136 systemd[1]: Reached target network-pre.target. Sep 9 00:46:18.781879 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 9 00:46:18.783518 systemd[1]: Mounting sys-kernel-config.mount... Sep 9 00:46:18.784111 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:46:18.786576 systemd[1]: Starting systemd-hwdb-update.service... Sep 9 00:46:18.788436 systemd[1]: Starting systemd-journal-flush.service... Sep 9 00:46:18.789248 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:46:18.790420 systemd[1]: Starting systemd-random-seed.service... 
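
The modprobe@configfs, modprobe@dm_mod, modprobe@fuse, and modprobe@loop units finishing above are all instances of one template, modprobe@.service, with the module name carried as the instance part (%i). A sketch of how such a name decomposes; this mirrors the naming convention, not systemd's internal parser:

```python
# Split a templated unit name into template and instance.
def split_template(unit: str):
    name, _, suffix = unit.rpartition(".")
    template, _, instance = name.partition("@")
    return f"{template}@.{suffix}", instance

for u in ("modprobe@dm_mod.service", "modprobe@fuse.service"):
    print(u, "->", split_template(u))
```
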
Sep 9 00:46:18.794865 systemd-journald[998]: Time spent on flushing to /var/log/journal/b431df1f6312474d9f5280109f1ebd4e is 22.233ms for 999 entries. Sep 9 00:46:18.794865 systemd-journald[998]: System Journal (/var/log/journal/b431df1f6312474d9f5280109f1ebd4e) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:46:18.829601 systemd-journald[998]: Received client request to flush runtime journal. Sep 9 00:46:18.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.791291 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:46:18.792584 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:46:18.794461 systemd[1]: Starting systemd-sysusers.service... Sep 9 00:46:18.799827 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 9 00:46:18.830851 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:46:18.801054 systemd[1]: Mounted sys-kernel-config.mount. Sep 9 00:46:18.804702 systemd[1]: Finished systemd-random-seed.service. Sep 9 00:46:18.805648 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:46:18.806449 systemd[1]: Reached target first-boot-complete.target. Sep 9 00:46:18.808174 systemd[1]: Starting systemd-udev-settle.service... Sep 9 00:46:18.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:18.816338 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:46:18.821094 systemd[1]: Finished systemd-sysusers.service. Sep 9 00:46:18.830395 systemd[1]: Finished systemd-journal-flush.service. Sep 9 00:46:19.157186 systemd[1]: Finished systemd-hwdb-update.service. Sep 9 00:46:19.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.158000 audit: BPF prog-id=21 op=LOAD Sep 9 00:46:19.158000 audit: BPF prog-id=22 op=LOAD Sep 9 00:46:19.158000 audit: BPF prog-id=7 op=UNLOAD Sep 9 00:46:19.158000 audit: BPF prog-id=8 op=UNLOAD Sep 9 00:46:19.159241 systemd[1]: Starting systemd-udevd.service... Sep 9 00:46:19.174221 systemd-udevd[1032]: Using default interface naming scheme 'v252'. Sep 9 00:46:19.188172 systemd[1]: Started systemd-udevd.service. 
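
journald reports 22.233ms spent flushing 999 entries to /var/log/journal. Reduced to a rate:

```python
# journald: "Time spent on flushing ... is 22.233ms for 999 entries."
ms_total, entries = 22.233, 999
print(f"{ms_total / entries * 1000:.1f} us/entry, "
      f"~{entries / (ms_total / 1000):,.0f} entries/s")
```
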
Sep 9 00:46:19.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.189000 audit: BPF prog-id=23 op=LOAD Sep 9 00:46:19.190286 systemd[1]: Starting systemd-networkd.service... Sep 9 00:46:19.198000 audit: BPF prog-id=24 op=LOAD Sep 9 00:46:19.198000 audit: BPF prog-id=25 op=LOAD Sep 9 00:46:19.198000 audit: BPF prog-id=26 op=LOAD Sep 9 00:46:19.199963 systemd[1]: Starting systemd-userdbd.service... Sep 9 00:46:19.218830 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 9 00:46:19.233104 systemd[1]: Started systemd-userdbd.service. Sep 9 00:46:19.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.251997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:46:19.276584 systemd-networkd[1040]: lo: Link UP Sep 9 00:46:19.276592 systemd-networkd[1040]: lo: Gained carrier Sep 9 00:46:19.276970 systemd-networkd[1040]: Enumeration completed Sep 9 00:46:19.277063 systemd[1]: Started systemd-networkd.service. Sep 9 00:46:19.277075 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:46:19.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.278715 systemd-networkd[1040]: eth0: Link UP Sep 9 00:46:19.278721 systemd-networkd[1040]: eth0: Gained carrier Sep 9 00:46:19.305590 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:46:19.311851 systemd[1]: Finished systemd-udev-settle.service. Sep 9 00:46:19.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.313668 systemd[1]: Starting lvm2-activation-early.service... Sep 9 00:46:19.322172 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:46:19.351291 systemd[1]: Finished lvm2-activation-early.service. Sep 9 00:46:19.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.352148 systemd[1]: Reached target cryptsetup.target. Sep 9 00:46:19.353793 systemd[1]: Starting lvm2-activation.service... Sep 9 00:46:19.357090 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:46:19.392499 systemd[1]: Finished lvm2-activation.service. Sep 9 00:46:19.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.393276 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:46:19.393979 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
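
The DHCPv4 lease above (10.0.0.137/16, gateway 10.0.0.1) can be sanity-checked with the standard library: the gateway should fall inside the interface's on-link network:

```python
# Check the logged lease with the standard library.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.137/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                    # 10.0.0.0/16
print(gateway in iface.network)         # True -> gateway is on-link
print(iface.network.broadcast_address)  # 10.0.255.255
```
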
Sep 9 00:46:19.394009 systemd[1]: Reached target local-fs.target. Sep 9 00:46:19.394589 systemd[1]: Reached target machines.target. Sep 9 00:46:19.396401 systemd[1]: Starting ldconfig.service... Sep 9 00:46:19.397540 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.397596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:19.399039 systemd[1]: Starting systemd-boot-update.service... Sep 9 00:46:19.401320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 9 00:46:19.403903 systemd[1]: Starting systemd-machine-id-commit.service... Sep 9 00:46:19.406662 systemd[1]: Starting systemd-sysext.service... Sep 9 00:46:19.408162 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) Sep 9 00:46:19.411128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 9 00:46:19.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.414323 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 9 00:46:19.421379 systemd[1]: Unmounting usr-share-oem.mount... Sep 9 00:46:19.427788 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 9 00:46:19.428001 systemd[1]: Unmounted usr-share-oem.mount. Sep 9 00:46:19.485504 kernel: loop0: detected capacity change from 0 to 207008 Sep 9 00:46:19.487244 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:46:19.487831 systemd[1]: Finished systemd-machine-id-commit.service. Sep 9 00:46:19.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.496486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:46:19.499349 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Sep 9 00:46:19.499349 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters Sep 9 00:46:19.501808 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 9 00:46:19.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.504362 systemd[1]: Mounting boot.mount... Sep 9 00:46:19.512954 systemd[1]: Mounted boot.mount. Sep 9 00:46:19.517260 kernel: loop1: detected capacity change from 0 to 207008 Sep 9 00:46:19.520072 systemd[1]: Finished systemd-boot-update.service. Sep 9 00:46:19.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.521646 (sd-sysext)[1084]: Using extensions 'kubernetes'. Sep 9 00:46:19.521971 (sd-sysext)[1084]: Merged extensions into '/usr'. Sep 9 00:46:19.543796 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
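
The (sd-sysext) lines show the 'kubernetes' extension image being merged into /usr; it was staged earlier in this log when Ignition wrote the /etc/extensions/kubernetes.raw symlink. A sketch that lists what a host's /etc/extensions currently points at (one of systemd-sysext's real search paths):

```python
# List sysext images visible under /etc/extensions, resolving symlinks
# like the kubernetes.raw one Ignition created earlier in this log.
from pathlib import Path

ext_dir = Path("/etc/extensions")
if ext_dir.is_dir():
    for entry in sorted(ext_dir.iterdir()):
        print(entry.name, "->", entry.resolve())
else:
    print("no /etc/extensions on this host")
```
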
Sep 9 00:46:19.545482 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:46:19.547597 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:46:19.549698 systemd[1]: Starting modprobe@loop.service... Sep 9 00:46:19.550680 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.550804 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:19.551880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:46:19.552070 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:46:19.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.553560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:46:19.553667 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:46:19.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.555133 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:46:19.555271 systemd[1]: Finished modprobe@loop.service. Sep 9 00:46:19.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.556954 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:46:19.557096 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.591828 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:46:19.596590 systemd[1]: Finished ldconfig.service. Sep 9 00:46:19.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.755578 systemd[1]: Mounting usr-share-oem.mount... Sep 9 00:46:19.760692 systemd[1]: Mounted usr-share-oem.mount. Sep 9 00:46:19.762298 systemd[1]: Finished systemd-sysext.service. Sep 9 00:46:19.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:19.764085 systemd[1]: Starting ensure-sysext.service... Sep 9 00:46:19.765612 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 9 00:46:19.769655 systemd[1]: Reloading. Sep 9 00:46:19.776798 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 9 00:46:19.778875 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:46:19.783751 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:46:19.801281 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:46:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:46:19.801575 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:46:19Z" level=info msg="torcx already run" Sep 9 00:46:19.856242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:46:19.856262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:46:19.871549 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:46:19.914000 audit: BPF prog-id=27 op=LOAD Sep 9 00:46:19.914000 audit: BPF prog-id=24 op=UNLOAD Sep 9 00:46:19.914000 audit: BPF prog-id=28 op=LOAD Sep 9 00:46:19.914000 audit: BPF prog-id=29 op=LOAD Sep 9 00:46:19.914000 audit: BPF prog-id=25 op=UNLOAD Sep 9 00:46:19.914000 audit: BPF prog-id=26 op=UNLOAD Sep 9 00:46:19.915000 audit: BPF prog-id=30 op=LOAD Sep 9 00:46:19.915000 audit: BPF prog-id=31 op=LOAD Sep 9 00:46:19.915000 audit: BPF prog-id=21 op=UNLOAD Sep 9 00:46:19.915000 audit: BPF prog-id=22 op=UNLOAD Sep 9 00:46:19.916000 audit: BPF prog-id=32 op=LOAD Sep 9 00:46:19.916000 audit: BPF prog-id=18 op=UNLOAD Sep 9 00:46:19.916000 audit: BPF prog-id=33 op=LOAD Sep 9 00:46:19.916000 audit: BPF prog-id=34 op=LOAD Sep 9 00:46:19.916000 audit: BPF prog-id=19 op=UNLOAD Sep 9 00:46:19.916000 audit: BPF prog-id=20 op=UNLOAD Sep 9 00:46:19.917000 audit: BPF prog-id=35 op=LOAD Sep 9 00:46:19.917000 audit: BPF prog-id=23 op=UNLOAD Sep 9 00:46:19.919526 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 9 00:46:19.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.923534 systemd[1]: Starting audit-rules.service... Sep 9 00:46:19.925345 systemd[1]: Starting clean-ca-certificates.service... Sep 9 00:46:19.927238 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 9 00:46:19.928000 audit: BPF prog-id=36 op=LOAD Sep 9 00:46:19.929459 systemd[1]: Starting systemd-resolved.service... Sep 9 00:46:19.930000 audit: BPF prog-id=37 op=LOAD Sep 9 00:46:19.931454 systemd[1]: Starting systemd-timesyncd.service... Sep 9 00:46:19.933065 systemd[1]: Starting systemd-update-utmp.service... 
Sep 9 00:46:19.934227 systemd[1]: Finished clean-ca-certificates.service. Sep 9 00:46:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.936865 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:46:19.936000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.940018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.941174 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:46:19.942830 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:46:19.944537 systemd[1]: Starting modprobe@loop.service... Sep 9 00:46:19.945226 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.945348 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:19.945441 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:46:19.946271 systemd[1]: Finished systemd-update-utmp.service. Sep 9 00:46:19.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.947364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:46:19.947500 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:46:19.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.948515 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 9 00:46:19.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.949621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:46:19.949729 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:46:19.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:46:19.950727 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:46:19.950829 systemd[1]: Finished modprobe@loop.service. Sep 9 00:46:19.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.953658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.954752 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:46:19.956532 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:46:19.958284 systemd[1]: Starting modprobe@loop.service... Sep 9 00:46:19.958965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.959082 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:19.960299 systemd[1]: Starting systemd-update-done.service... Sep 9 00:46:19.961016 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:46:19.962315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:46:19.962428 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:46:19.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.967942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:46:19.968052 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:46:19.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.969276 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:46:19.969380 systemd[1]: Finished modprobe@loop.service. Sep 9 00:46:19.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.970430 systemd[1]: Finished systemd-update-done.service. 
Sep 9 00:46:19.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:46:19.971593 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:46:19.971688 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.973894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.975099 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:46:19.976973 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:46:19.977026 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2025-09-09 00:46:19.614444 UTC. Sep 9 00:46:19.977219 systemd[1]: Starting modprobe@drm.service... Sep 9 00:46:19.979075 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:46:19.980861 systemd[1]: Starting modprobe@loop.service... Sep 9 00:46:19.981515 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.981628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:19.982757 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 9 00:46:19.983684 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:46:19.984044 systemd-resolved[1153]: Positive Trust Anchors: Sep 9 00:46:19.983000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 9 00:46:19.983000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8874d60 a2=420 a3=0 items=0 ppid=1149 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:46:19.983000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 9 00:46:19.984404 augenrules[1181]: No rules Sep 9 00:46:19.984550 systemd[1]: Started systemd-timesyncd.service. Sep 9 00:46:19.984661 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:46:19.984738 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 9 00:46:19.985959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:46:19.986073 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:46:19.987160 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:46:19.987267 systemd[1]: Finished modprobe@drm.service. Sep 9 00:46:19.988401 systemd[1]: Finished audit-rules.service. 
Sep 9 00:46:19.989452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:46:19.989576 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:46:19.990625 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:46:19.990730 systemd[1]: Finished modprobe@loop.service. Sep 9 00:46:19.991942 systemd[1]: Reached target time-set.target. Sep 9 00:46:19.992602 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:46:19.992632 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:46:19.993531 systemd[1]: Finished ensure-sysext.service. Sep 9 00:46:19.994907 systemd-resolved[1153]: Defaulting to hostname 'linux'. Sep 9 00:46:19.996396 systemd[1]: Started systemd-resolved.service. Sep 9 00:46:19.997158 systemd[1]: Reached target network.target. Sep 9 00:46:19.997760 systemd[1]: Reached target nss-lookup.target. Sep 9 00:46:19.998327 systemd[1]: Reached target sysinit.target. Sep 9 00:46:19.998997 systemd[1]: Started motdgen.path. Sep 9 00:46:19.999527 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 9 00:46:20.000443 systemd[1]: Started logrotate.timer. Sep 9 00:46:20.001298 systemd[1]: Started mdadm.timer. Sep 9 00:46:20.001854 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 9 00:46:20.002429 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:46:20.002454 systemd[1]: Reached target paths.target. Sep 9 00:46:20.002978 systemd[1]: Reached target timers.target. Sep 9 00:46:20.003772 systemd[1]: Listening on dbus.socket. Sep 9 00:46:20.005156 systemd[1]: Starting docker.socket... Sep 9 00:46:20.007861 systemd[1]: Listening on sshd.socket. Sep 9 00:46:20.008549 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:46:20.008926 systemd[1]: Listening on docker.socket. Sep 9 00:46:20.009558 systemd[1]: Reached target sockets.target. Sep 9 00:46:20.010094 systemd[1]: Reached target basic.target. Sep 9 00:46:20.010742 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:46:20.010768 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 9 00:46:20.011625 systemd[1]: Starting containerd.service... Sep 9 00:46:20.013058 systemd[1]: Starting dbus.service... Sep 9 00:46:20.014456 systemd[1]: Starting enable-oem-cloudinit.service... Sep 9 00:46:20.016062 systemd[1]: Starting extend-filesystems.service... Sep 9 00:46:20.016858 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 9 00:46:20.018392 jq[1191]: false Sep 9 00:46:20.017839 systemd[1]: Starting motdgen.service... Sep 9 00:46:20.019387 systemd[1]: Starting prepare-helm.service... Sep 9 00:46:20.021038 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 9 00:46:20.022556 systemd[1]: Starting sshd-keygen.service... Sep 9 00:46:20.025638 systemd[1]: Starting systemd-logind.service... Sep 9 00:46:20.026192 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 9 00:46:20.026258 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:46:20.026673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:46:20.027446 systemd[1]: Starting update-engine.service... Sep 9 00:46:20.029494 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 9 00:46:20.032019 extend-filesystems[1192]: Found loop1 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda1 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda2 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda3 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found usr Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda4 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda6 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda7 Sep 9 00:46:20.032019 extend-filesystems[1192]: Found vda9 Sep 9 00:46:20.032019 extend-filesystems[1192]: Checking size of /dev/vda9 Sep 9 00:46:20.048631 jq[1209]: true Sep 9 00:46:20.031721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:46:20.032021 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 9 00:46:20.032945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:46:20.048972 tar[1212]: linux-arm64/LICENSE Sep 9 00:46:20.048972 tar[1212]: linux-arm64/helm Sep 9 00:46:20.033098 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 9 00:46:20.049214 jq[1213]: true Sep 9 00:46:20.047992 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:46:20.048136 systemd[1]: Finished motdgen.service. Sep 9 00:46:20.055896 extend-filesystems[1192]: Resized partition /dev/vda9 Sep 9 00:46:20.064479 extend-filesystems[1232]: resize2fs 1.46.5 (30-Dec-2021) Sep 9 00:46:20.066581 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:46:20.075013 dbus-daemon[1190]: [system] SELinux support is enabled Sep 9 00:46:20.075155 systemd[1]: Started dbus.service. Sep 9 00:46:20.077865 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:46:20.077892 systemd[1]: Reached target system-config.target. Sep 9 00:46:20.078942 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:46:20.078961 systemd[1]: Reached target user-config.target. Sep 9 00:46:20.088587 update_engine[1205]: I0909 00:46:20.088382 1205 main.cc:92] Flatcar Update Engine starting Sep 9 00:46:20.095530 update_engine[1205]: I0909 00:46:20.090953 1205 update_check_scheduler.cc:74] Next update check in 10m7s Sep 9 00:46:20.090918 systemd[1]: Started update-engine.service. Sep 9 00:46:20.093445 systemd[1]: Started locksmithd.service. Sep 9 00:46:20.095972 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:46:20.096366 systemd-logind[1202]: New seat seat0. Sep 9 00:46:20.097972 systemd[1]: Started systemd-logind.service. 
Sep 9 00:46:20.106489 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:46:20.119609 extend-filesystems[1232]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:46:20.119609 extend-filesystems[1232]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:46:20.119609 extend-filesystems[1232]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:46:20.123633 extend-filesystems[1192]: Resized filesystem in /dev/vda9 Sep 9 00:46:20.125434 bash[1236]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:46:20.125547 env[1215]: time="2025-09-09T00:46:20.120805474Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 9 00:46:20.122893 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:46:20.123138 systemd[1]: Finished extend-filesystems.service. Sep 9 00:46:20.125552 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 9 00:46:20.140658 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:46:20.151892 env[1215]: time="2025-09-09T00:46:20.151853136Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:46:20.152012 env[1215]: time="2025-09-09T00:46:20.151992483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153092 env[1215]: time="2025-09-09T00:46:20.153062006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153133 env[1215]: time="2025-09-09T00:46:20.153092289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153304 env[1215]: time="2025-09-09T00:46:20.153278721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153332 env[1215]: time="2025-09-09T00:46:20.153302092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153332 env[1215]: time="2025-09-09T00:46:20.153315458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 9 00:46:20.153332 env[1215]: time="2025-09-09T00:46:20.153324890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153409 env[1215]: time="2025-09-09T00:46:20.153393246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153648 env[1215]: time="2025-09-09T00:46:20.153628597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153783 env[1215]: time="2025-09-09T00:46:20.153761414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:46:20.153809 env[1215]: time="2025-09-09T00:46:20.153780966Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:46:20.153850 env[1215]: time="2025-09-09T00:46:20.153833397Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 9 00:46:20.153850 env[1215]: time="2025-09-09T00:46:20.153847985Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:46:20.157315 env[1215]: time="2025-09-09T00:46:20.157291446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:46:20.157348 env[1215]: time="2025-09-09T00:46:20.157320506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:46:20.157348 env[1215]: time="2025-09-09T00:46:20.157332536Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:46:20.157393 env[1215]: time="2025-09-09T00:46:20.157365224Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157393 env[1215]: time="2025-09-09T00:46:20.157378590Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157393 env[1215]: time="2025-09-09T00:46:20.157390543Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157489 env[1215]: time="2025-09-09T00:46:20.157459013Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157805 env[1215]: time="2025-09-09T00:46:20.157784907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157832 env[1215]: time="2025-09-09T00:46:20.157809653Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157832 env[1215]: time="2025-09-09T00:46:20.157822636Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157876 env[1215]: time="2025-09-09T00:46:20.157834054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.157876 env[1215]: time="2025-09-09T00:46:20.157845473Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:46:20.157965 env[1215]: time="2025-09-09T00:46:20.157944570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:46:20.158034 env[1215]: time="2025-09-09T00:46:20.158018730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:46:20.158269 env[1215]: time="2025-09-09T00:46:20.158251140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:46:20.158300 env[1215]: time="2025-09-09T00:46:20.158279056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 9 00:46:20.158300 env[1215]: time="2025-09-09T00:46:20.158292460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:46:20.158415 env[1215]: time="2025-09-09T00:46:20.158400645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158446 env[1215]: time="2025-09-09T00:46:20.158417333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158446 env[1215]: time="2025-09-09T00:46:20.158429821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158446 env[1215]: time="2025-09-09T00:46:20.158440055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158512 env[1215]: time="2025-09-09T00:46:20.158472667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158512 env[1215]: time="2025-09-09T00:46:20.158486720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158512 env[1215]: time="2025-09-09T00:46:20.158498291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158512 env[1215]: time="2025-09-09T00:46:20.158508984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158584 env[1215]: time="2025-09-09T00:46:20.158520975Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:46:20.158670 env[1215]: time="2025-09-09T00:46:20.158649400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158698 env[1215]: time="2025-09-09T00:46:20.158675597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158698 env[1215]: time="2025-09-09T00:46:20.158690528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:46:20.158734 env[1215]: time="2025-09-09T00:46:20.158701832Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:46:20.158734 env[1215]: time="2025-09-09T00:46:20.158714243Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 9 00:46:20.158734 env[1215]: time="2025-09-09T00:46:20.158723942Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:46:20.158793 env[1215]: time="2025-09-09T00:46:20.158738644Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 9 00:46:20.158793 env[1215]: time="2025-09-09T00:46:20.158768469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:46:20.158984 env[1215]: time="2025-09-09T00:46:20.158938710Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:46:20.159528 env[1215]: time="2025-09-09T00:46:20.158991638Z" level=info msg="Connect containerd service" Sep 9 00:46:20.159528 env[1215]: time="2025-09-09T00:46:20.159024479Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:46:20.159640 env[1215]: time="2025-09-09T00:46:20.159612379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:46:20.159815 env[1215]: time="2025-09-09T00:46:20.159788004Z" level=info msg="Start subscribing containerd event" Sep 9 00:46:20.159843 env[1215]: time="2025-09-09T00:46:20.159828025Z" level=info msg="Start recovering state" Sep 9 00:46:20.159892 env[1215]: time="2025-09-09T00:46:20.159877936Z" level=info msg="Start event monitor" Sep 9 00:46:20.159926 env[1215]: time="2025-09-09T00:46:20.159898023Z" level=info msg="Start snapshots syncer" Sep 9 00:46:20.159926 env[1215]: time="2025-09-09T00:46:20.159907264Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:46:20.159926 env[1215]: time="2025-09-09T00:46:20.159914023Z" level=info msg="Start streaming server" Sep 9 00:46:20.160274 env[1215]: time="2025-09-09T00:46:20.160229912Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 9 00:46:20.160360 env[1215]: time="2025-09-09T00:46:20.160342031Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:46:20.160416 env[1215]: time="2025-09-09T00:46:20.160401833Z" level=info msg="containerd successfully booted in 0.050112s" Sep 9 00:46:20.160475 systemd[1]: Started containerd.service. Sep 9 00:46:20.451588 tar[1212]: linux-arm64/README.md Sep 9 00:46:20.455552 systemd[1]: Finished prepare-helm.service. Sep 9 00:46:20.455580 systemd-networkd[1040]: eth0: Gained IPv6LL Sep 9 00:46:20.460318 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 9 00:46:20.461401 systemd[1]: Reached target network-online.target. Sep 9 00:46:20.463562 systemd[1]: Starting kubelet.service... Sep 9 00:46:21.025769 systemd[1]: Started kubelet.service. Sep 9 00:46:21.375038 kubelet[1257]: E0909 00:46:21.374948 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:46:21.376876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:46:21.377001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:46:21.773314 sshd_keygen[1216]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:46:21.790196 systemd[1]: Finished sshd-keygen.service. Sep 9 00:46:21.792166 systemd[1]: Starting issuegen.service... Sep 9 00:46:21.796335 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:46:21.796505 systemd[1]: Finished issuegen.service. Sep 9 00:46:21.798380 systemd[1]: Starting systemd-user-sessions.service... Sep 9 00:46:21.804213 systemd[1]: Finished systemd-user-sessions.service. Sep 9 00:46:21.806150 systemd[1]: Started getty@tty1.service. Sep 9 00:46:21.807929 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 9 00:46:21.808736 systemd[1]: Reached target getty.target. Sep 9 00:46:21.809401 systemd[1]: Reached target multi-user.target. Sep 9 00:46:21.811172 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 9 00:46:21.816977 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 9 00:46:21.817121 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 9 00:46:21.818004 systemd[1]: Startup finished in 535ms (kernel) + 5.195s (initrd) + 5.069s (userspace) = 10.801s. Sep 9 00:46:24.243191 systemd[1]: Created slice system-sshd.slice. Sep 9 00:46:24.244273 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:34242.service. Sep 9 00:46:24.282218 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 34242 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:46:24.284034 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.291885 systemd[1]: Created slice user-500.slice. Sep 9 00:46:24.292919 systemd[1]: Starting user-runtime-dir@500.service... Sep 9 00:46:24.295507 systemd-logind[1202]: New session 1 of user core. Sep 9 00:46:24.301210 systemd[1]: Finished user-runtime-dir@500.service. Sep 9 00:46:24.302661 systemd[1]: Starting user@500.service... Sep 9 00:46:24.305238 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.362415 systemd[1282]: Queued start job for default target default.target. 
Sep 9 00:46:24.362874 systemd[1282]: Reached target paths.target. Sep 9 00:46:24.362904 systemd[1282]: Reached target sockets.target. Sep 9 00:46:24.362926 systemd[1282]: Reached target timers.target. Sep 9 00:46:24.362937 systemd[1282]: Reached target basic.target. Sep 9 00:46:24.362976 systemd[1282]: Reached target default.target. Sep 9 00:46:24.363000 systemd[1282]: Startup finished in 52ms. Sep 9 00:46:24.363062 systemd[1]: Started user@500.service. Sep 9 00:46:24.363961 systemd[1]: Started session-1.scope. Sep 9 00:46:24.414270 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:34256.service. Sep 9 00:46:24.446284 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 34256 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:46:24.447421 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.451861 systemd[1]: Started session-2.scope. Sep 9 00:46:24.452155 systemd-logind[1202]: New session 2 of user core. Sep 9 00:46:24.504765 sshd[1291]: pam_unix(sshd:session): session closed for user core Sep 9 00:46:24.507921 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:34256.service: Deactivated successfully. Sep 9 00:46:24.508482 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:46:24.508958 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:46:24.509919 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:34264.service. Sep 9 00:46:24.510608 systemd-logind[1202]: Removed session 2. Sep 9 00:46:24.541109 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:46:24.542149 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.544984 systemd-logind[1202]: New session 3 of user core. Sep 9 00:46:24.545733 systemd[1]: Started session-3.scope. Sep 9 00:46:24.593040 sshd[1297]: pam_unix(sshd:session): session closed for user core Sep 9 00:46:24.597885 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:34272.service. Sep 9 00:46:24.598365 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:34264.service: Deactivated successfully. Sep 9 00:46:24.598975 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:46:24.599460 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:46:24.600194 systemd-logind[1202]: Removed session 3. Sep 9 00:46:24.630227 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 34272 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:46:24.631299 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.634149 systemd-logind[1202]: New session 4 of user core. Sep 9 00:46:24.634889 systemd[1]: Started session-4.scope. Sep 9 00:46:24.687169 sshd[1303]: pam_unix(sshd:session): session closed for user core Sep 9 00:46:24.689627 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:34272.service: Deactivated successfully. Sep 9 00:46:24.690173 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:46:24.690634 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:46:24.691663 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:34288.service. Sep 9 00:46:24.692286 systemd-logind[1202]: Removed session 4. 
Sep 9 00:46:24.723436 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 34288 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:46:24.724764 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:46:24.727734 systemd-logind[1202]: New session 5 of user core. Sep 9 00:46:24.728527 systemd[1]: Started session-5.scope. Sep 9 00:46:24.782562 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:46:24.782905 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:46:24.820316 systemd[1]: Starting docker.service... Sep 9 00:46:24.875075 env[1325]: time="2025-09-09T00:46:24.875012995Z" level=info msg="Starting up" Sep 9 00:46:24.876637 env[1325]: time="2025-09-09T00:46:24.876613138Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:46:24.876716 env[1325]: time="2025-09-09T00:46:24.876703279Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:46:24.876804 env[1325]: time="2025-09-09T00:46:24.876787657Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:46:24.876873 env[1325]: time="2025-09-09T00:46:24.876861288Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:46:24.878971 env[1325]: time="2025-09-09T00:46:24.878945465Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 9 00:46:24.878971 env[1325]: time="2025-09-09T00:46:24.878967192Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 9 00:46:24.879065 env[1325]: time="2025-09-09T00:46:24.878982456Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 9 00:46:24.879065 env[1325]: time="2025-09-09T00:46:24.878991801Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 9 00:46:24.883004 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3780273493-merged.mount: Deactivated successfully. Sep 9 00:46:25.085584 env[1325]: time="2025-09-09T00:46:25.085495781Z" level=info msg="Loading containers: start." Sep 9 00:46:25.198485 kernel: Initializing XFRM netlink socket Sep 9 00:46:25.220382 env[1325]: time="2025-09-09T00:46:25.220350684Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 9 00:46:25.270074 systemd-networkd[1040]: docker0: Link UP Sep 9 00:46:25.288711 env[1325]: time="2025-09-09T00:46:25.288677361Z" level=info msg="Loading containers: done." Sep 9 00:46:25.302206 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck358386360-merged.mount: Deactivated successfully. Sep 9 00:46:25.303651 env[1325]: time="2025-09-09T00:46:25.303615987Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:46:25.303888 env[1325]: time="2025-09-09T00:46:25.303869984Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 9 00:46:25.304034 env[1325]: time="2025-09-09T00:46:25.304019077Z" level=info msg="Daemon has completed initialization" Sep 9 00:46:25.323494 systemd[1]: Started docker.service. 
Sep 9 00:46:25.325116 env[1325]: time="2025-09-09T00:46:25.325079456Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:46:25.864332 env[1215]: time="2025-09-09T00:46:25.864288087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 00:46:26.433200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204055272.mount: Deactivated successfully. Sep 9 00:46:27.776172 env[1215]: time="2025-09-09T00:46:27.776122837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:27.777640 env[1215]: time="2025-09-09T00:46:27.777613553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:27.779544 env[1215]: time="2025-09-09T00:46:27.779519115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:27.781251 env[1215]: time="2025-09-09T00:46:27.781218885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:27.781929 env[1215]: time="2025-09-09T00:46:27.781898533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 9 00:46:27.782585 env[1215]: time="2025-09-09T00:46:27.782560266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 00:46:29.244366 env[1215]: time="2025-09-09T00:46:29.244322735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:29.245968 env[1215]: time="2025-09-09T00:46:29.245936252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:29.248216 env[1215]: time="2025-09-09T00:46:29.248189810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:29.250114 env[1215]: time="2025-09-09T00:46:29.250086890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:29.250980 env[1215]: time="2025-09-09T00:46:29.250953917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 9 00:46:29.251927 env[1215]: time="2025-09-09T00:46:29.251903641Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 00:46:30.556934 env[1215]: time="2025-09-09T00:46:30.556888901Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:30.558388 env[1215]: time="2025-09-09T00:46:30.558335214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:30.560807 env[1215]: time="2025-09-09T00:46:30.560781030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:30.562327 env[1215]: time="2025-09-09T00:46:30.562291923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:30.563214 env[1215]: time="2025-09-09T00:46:30.563169616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 9 00:46:30.563710 env[1215]: time="2025-09-09T00:46:30.563683102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:46:31.627709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:46:31.628824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587834787.mount: Deactivated successfully. Sep 9 00:46:31.629356 systemd[1]: Stopped kubelet.service. Sep 9 00:46:31.630665 systemd[1]: Starting kubelet.service... Sep 9 00:46:31.726067 systemd[1]: Started kubelet.service. Sep 9 00:46:31.762444 kubelet[1464]: E0909 00:46:31.762402 1464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:46:31.765622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:46:31.765762 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 9 00:46:32.185750 env[1215]: time="2025-09-09T00:46:32.185707291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:32.187082 env[1215]: time="2025-09-09T00:46:32.187050123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:32.188682 env[1215]: time="2025-09-09T00:46:32.188647611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:32.189877 env[1215]: time="2025-09-09T00:46:32.189855090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:32.190197 env[1215]: time="2025-09-09T00:46:32.190173597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 9 00:46:32.190646 env[1215]: time="2025-09-09T00:46:32.190623533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:46:32.699135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424971745.mount: Deactivated successfully. Sep 9 00:46:33.733416 env[1215]: time="2025-09-09T00:46:33.733358564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:33.734902 env[1215]: time="2025-09-09T00:46:33.734870117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:33.736689 env[1215]: time="2025-09-09T00:46:33.736657172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:33.739170 env[1215]: time="2025-09-09T00:46:33.739139511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:33.739924 env[1215]: time="2025-09-09T00:46:33.739896617Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:46:33.740353 env[1215]: time="2025-09-09T00:46:33.740307668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:46:34.185150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540509847.mount: Deactivated successfully. 
Sep 9 00:46:34.190162 env[1215]: time="2025-09-09T00:46:34.190112593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:34.192536 env[1215]: time="2025-09-09T00:46:34.192496976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:34.194317 env[1215]: time="2025-09-09T00:46:34.194285273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:34.195699 env[1215]: time="2025-09-09T00:46:34.195674539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:34.196262 env[1215]: time="2025-09-09T00:46:34.196235273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:46:34.197111 env[1215]: time="2025-09-09T00:46:34.197076832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 00:46:34.770704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337796224.mount: Deactivated successfully. Sep 9 00:46:37.095527 env[1215]: time="2025-09-09T00:46:37.095422352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:37.158523 env[1215]: time="2025-09-09T00:46:37.158450708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:37.160490 env[1215]: time="2025-09-09T00:46:37.160455361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:37.163324 env[1215]: time="2025-09-09T00:46:37.163287722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:37.164383 env[1215]: time="2025-09-09T00:46:37.163796569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 9 00:46:42.016588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:46:42.016758 systemd[1]: Stopped kubelet.service. Sep 9 00:46:42.018138 systemd[1]: Starting kubelet.service... Sep 9 00:46:42.112346 systemd[1]: Started kubelet.service. 
Sep 9 00:46:42.145541 kubelet[1496]: E0909 00:46:42.145489 1496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:46:42.147503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:46:42.147636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:46:43.943812 systemd[1]: Stopped kubelet.service. Sep 9 00:46:43.945777 systemd[1]: Starting kubelet.service... Sep 9 00:46:43.969165 systemd[1]: Reloading. Sep 9 00:46:44.021011 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-09-09T00:46:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:46:44.021043 /usr/lib/systemd/system-generators/torcx-generator[1530]: time="2025-09-09T00:46:44Z" level=info msg="torcx already run" Sep 9 00:46:44.202809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:46:44.202828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:46:44.218152 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:46:44.283829 systemd[1]: Started kubelet.service. Sep 9 00:46:44.287278 systemd[1]: Stopping kubelet.service... Sep 9 00:46:44.287937 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:46:44.288120 systemd[1]: Stopped kubelet.service. Sep 9 00:46:44.289512 systemd[1]: Starting kubelet.service... Sep 9 00:46:44.379350 systemd[1]: Started kubelet.service. Sep 9 00:46:44.410415 kubelet[1579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:46:44.410415 kubelet[1579]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:46:44.410415 kubelet[1579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:46:44.410731 kubelet[1579]: I0909 00:46:44.410488 1579 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:46:45.176186 kubelet[1579]: I0909 00:46:45.176133 1579 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:46:45.176186 kubelet[1579]: I0909 00:46:45.176170 1579 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:46:45.176462 kubelet[1579]: I0909 00:46:45.176434 1579 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:46:45.198575 kubelet[1579]: E0909 00:46:45.198537 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:45.199586 kubelet[1579]: I0909 00:46:45.199564 1579 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:46:45.207225 kubelet[1579]: E0909 00:46:45.207171 1579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:46:45.207225 kubelet[1579]: I0909 00:46:45.207224 1579 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:46:45.209707 kubelet[1579]: I0909 00:46:45.209693 1579 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:46:45.210355 kubelet[1579]: I0909 00:46:45.210325 1579 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:46:45.210516 kubelet[1579]: I0909 00:46:45.210358 1579 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:46:45.210605 kubelet[1579]: I0909 00:46:45.210598 1579 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:46:45.210635 kubelet[1579]: I0909 00:46:45.210610 1579 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:46:45.210794 kubelet[1579]: I0909 00:46:45.210781 1579 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:46:45.213833 kubelet[1579]: I0909 00:46:45.213810 1579 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:46:45.213833 kubelet[1579]: I0909 00:46:45.213833 1579 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:46:45.213901 kubelet[1579]: I0909 00:46:45.213851 1579 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:46:45.213901 kubelet[1579]: I0909 00:46:45.213869 1579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:46:45.228793 kubelet[1579]: W0909 00:46:45.228742 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:45.228838 kubelet[1579]: E0909 00:46:45.228800 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:45.229082 kubelet[1579]: W0909 00:46:45.229042 1579 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:45.229127 kubelet[1579]: E0909 00:46:45.229080 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:45.231810 kubelet[1579]: I0909 00:46:45.231778 1579 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:46:45.232494 kubelet[1579]: I0909 00:46:45.232459 1579 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:46:45.232615 kubelet[1579]: W0909 00:46:45.232592 1579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:46:45.233312 kubelet[1579]: I0909 00:46:45.233300 1579 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:46:45.233366 kubelet[1579]: I0909 00:46:45.233330 1579 server.go:1287] "Started kubelet" Sep 9 00:46:45.233435 kubelet[1579]: I0909 00:46:45.233401 1579 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:46:45.233704 kubelet[1579]: I0909 00:46:45.233660 1579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:46:45.234297 kubelet[1579]: I0909 00:46:45.234266 1579 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:46:45.234505 kubelet[1579]: I0909 00:46:45.234486 1579 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:46:45.236034 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 9 00:46:45.236262 kubelet[1579]: I0909 00:46:45.236242 1579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:46:45.237499 kubelet[1579]: E0909 00:46:45.237476 1579 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:46:45.237499 kubelet[1579]: E0909 00:46:45.236506 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186376b65dbb60c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:46:45.233311939 +0000 UTC m=+0.850860731,LastTimestamp:2025-09-09 00:46:45.233311939 +0000 UTC m=+0.850860731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:46:45.237634 kubelet[1579]: I0909 00:46:45.236321 1579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:46:45.238095 kubelet[1579]: I0909 00:46:45.238071 1579 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:46:45.238148 kubelet[1579]: E0909 00:46:45.238096 1579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:46:45.238190 kubelet[1579]: I0909 00:46:45.238155 1579 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:46:45.238216 kubelet[1579]: I0909 00:46:45.238194 1579 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:46:45.238573 kubelet[1579]: I0909 00:46:45.238533 1579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:46:45.238713 kubelet[1579]: E0909 00:46:45.238689 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Sep 9 00:46:45.238827 kubelet[1579]: W0909 00:46:45.238788 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:45.238886 kubelet[1579]: E0909 00:46:45.238833 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:45.239386 kubelet[1579]: I0909 00:46:45.239357 1579 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:46:45.239494 kubelet[1579]: I0909 00:46:45.239480 1579 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:46:45.249870 kubelet[1579]: I0909 00:46:45.249854 1579 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:46:45.249870 kubelet[1579]: I0909 00:46:45.249867 1579 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:46:45.249966 
kubelet[1579]: I0909 00:46:45.249881 1579 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:46:45.338553 kubelet[1579]: E0909 00:46:45.338491 1579 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:46:45.343888 kubelet[1579]: I0909 00:46:45.343860 1579 policy_none.go:49] "None policy: Start" Sep 9 00:46:45.343888 kubelet[1579]: I0909 00:46:45.343884 1579 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:46:45.343991 kubelet[1579]: I0909 00:46:45.343896 1579 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:46:45.348767 systemd[1]: Created slice kubepods.slice. Sep 9 00:46:45.350618 kubelet[1579]: I0909 00:46:45.350537 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:46:45.351536 kubelet[1579]: I0909 00:46:45.351486 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:46:45.351536 kubelet[1579]: I0909 00:46:45.351508 1579 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:46:45.351536 kubelet[1579]: I0909 00:46:45.351528 1579 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:46:45.351536 kubelet[1579]: I0909 00:46:45.351536 1579 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:46:45.351679 kubelet[1579]: E0909 00:46:45.351665 1579 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:46:45.353297 kubelet[1579]: W0909 00:46:45.353132 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:45.353297 kubelet[1579]: E0909 00:46:45.353180 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:45.353939 systemd[1]: Created slice kubepods-besteffort.slice. Sep 9 00:46:45.367435 systemd[1]: Created slice kubepods-burstable.slice. Sep 9 00:46:45.368302 kubelet[1579]: I0909 00:46:45.368286 1579 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:46:45.368653 kubelet[1579]: I0909 00:46:45.368640 1579 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:46:45.368772 kubelet[1579]: I0909 00:46:45.368743 1579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:46:45.368986 kubelet[1579]: I0909 00:46:45.368974 1579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:46:45.369514 kubelet[1579]: E0909 00:46:45.369498 1579 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:46:45.369581 kubelet[1579]: E0909 00:46:45.369530 1579 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:46:45.440485 kubelet[1579]: E0909 00:46:45.439098 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Sep 9 00:46:45.458194 systemd[1]: Created slice kubepods-burstable-pod4f526379a0ec1f5f101d05961c403bd4.slice. Sep 9 00:46:45.469789 kubelet[1579]: I0909 00:46:45.469749 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:46:45.470177 kubelet[1579]: E0909 00:46:45.470140 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Sep 9 00:46:45.484510 kubelet[1579]: E0909 00:46:45.484481 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:45.486730 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 00:46:45.488126 kubelet[1579]: E0909 00:46:45.488102 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:45.489693 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 00:46:45.491002 kubelet[1579]: E0909 00:46:45.490979 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:45.639611 kubelet[1579]: I0909 00:46:45.639577 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:45.639658 kubelet[1579]: I0909 00:46:45.639614 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:45.639658 kubelet[1579]: I0909 00:46:45.639636 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:45.639658 kubelet[1579]: I0909 00:46:45.639651 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:45.639951 kubelet[1579]: I0909 00:46:45.639922 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:45.639989 kubelet[1579]: I0909 00:46:45.639951 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:45.639989 kubelet[1579]: I0909 00:46:45.639969 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:45.639989 kubelet[1579]: I0909 00:46:45.639988 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:45.640088 kubelet[1579]: I0909 00:46:45.640020 1579 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:45.671616 kubelet[1579]: I0909 00:46:45.671588 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:46:45.671935 kubelet[1579]: E0909 00:46:45.671913 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Sep 9 00:46:45.786334 kubelet[1579]: E0909 00:46:45.785831 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:45.786610 env[1215]: time="2025-09-09T00:46:45.786568809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f526379a0ec1f5f101d05961c403bd4,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:45.788916 kubelet[1579]: E0909 00:46:45.788887 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:45.789483 env[1215]: time="2025-09-09T00:46:45.789274890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:45.791481 kubelet[1579]: E0909 00:46:45.791442 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 9 00:46:45.791796 env[1215]: time="2025-09-09T00:46:45.791767634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:45.840305 kubelet[1579]: E0909 00:46:45.840260 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Sep 9 00:46:46.073631 kubelet[1579]: I0909 00:46:46.073356 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:46:46.073714 kubelet[1579]: E0909 00:46:46.073664 1579 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Sep 9 00:46:46.144936 kubelet[1579]: W0909 00:46:46.144881 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:46.145025 kubelet[1579]: E0909 00:46:46.144939 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:46.277792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208536044.mount: Deactivated successfully. 
Sep 9 00:46:46.281910 env[1215]: time="2025-09-09T00:46:46.281868025Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.283703 env[1215]: time="2025-09-09T00:46:46.283672042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.284741 env[1215]: time="2025-09-09T00:46:46.284704827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.285497 env[1215]: time="2025-09-09T00:46:46.285459324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.286708 env[1215]: time="2025-09-09T00:46:46.286683638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.290646 env[1215]: time="2025-09-09T00:46:46.290614738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.292300 env[1215]: time="2025-09-09T00:46:46.292270365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.294730 env[1215]: time="2025-09-09T00:46:46.294695947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.296179 env[1215]: time="2025-09-09T00:46:46.296151296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.297716 env[1215]: time="2025-09-09T00:46:46.297687850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.300246 env[1215]: time="2025-09-09T00:46:46.300185490Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.301407 env[1215]: time="2025-09-09T00:46:46.301373696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:46:46.324832 env[1215]: time="2025-09-09T00:46:46.324543284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:46.324832 env[1215]: time="2025-09-09T00:46:46.324585584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:46.324832 env[1215]: time="2025-09-09T00:46:46.324595650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:46.324832 env[1215]: time="2025-09-09T00:46:46.324760538Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf8d90242ac6cc12eb38b0ff861a54245b10c37c9cb05bc96ac9cb3d91f71d94 pid=1629 runtime=io.containerd.runc.v2 Sep 9 00:46:46.325329 env[1215]: time="2025-09-09T00:46:46.325125383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:46.325549 env[1215]: time="2025-09-09T00:46:46.325159056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:46.325549 env[1215]: time="2025-09-09T00:46:46.325518909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:46.325992 env[1215]: time="2025-09-09T00:46:46.325946506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e0a2a9bbdbcbd9b4fc8b0987fc5715d698de351a3a38fbf9ba5642281647f6d pid=1625 runtime=io.containerd.runc.v2 Sep 9 00:46:46.327881 env[1215]: time="2025-09-09T00:46:46.327802730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:46.327881 env[1215]: time="2025-09-09T00:46:46.327836722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:46.328040 env[1215]: time="2025-09-09T00:46:46.327871234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:46.328256 env[1215]: time="2025-09-09T00:46:46.328222299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7db7e3a3c8f7551d52fe70f9e229ffdac4eb86ac6eef9e7214c6326aaf8dd5c7 pid=1646 runtime=io.containerd.runc.v2 Sep 9 00:46:46.336887 systemd[1]: Started cri-containerd-cf8d90242ac6cc12eb38b0ff861a54245b10c37c9cb05bc96ac9cb3d91f71d94.scope. Sep 9 00:46:46.340148 systemd[1]: Started cri-containerd-6e0a2a9bbdbcbd9b4fc8b0987fc5715d698de351a3a38fbf9ba5642281647f6d.scope. Sep 9 00:46:46.357487 systemd[1]: Started cri-containerd-7db7e3a3c8f7551d52fe70f9e229ffdac4eb86ac6eef9e7214c6326aaf8dd5c7.scope. 
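The three "starting signal loop" shims and cri-containerd-*.scope units above are the runc sandboxes for the control-plane static pods. For illustration, a hedged sketch of the CRI call the kubelet issues for one of them; the metadata is copied from the kube-scheduler entry, and a real request carries considerably more sandbox config (log directory, Linux options, and so on):

// Sketch: RunPodSandbox over containerd's CRI gRPC socket. The returned ID
// is the long hex name seen in the cri-containerd-<id>.scope units.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost",
				Uid:       "a9176403b596d0b29ae8ad12d635226d",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}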
Sep 9 00:46:46.382473 env[1215]: time="2025-09-09T00:46:46.381127462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e0a2a9bbdbcbd9b4fc8b0987fc5715d698de351a3a38fbf9ba5642281647f6d\"" Sep 9 00:46:46.383721 kubelet[1579]: E0909 00:46:46.383118 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:46.386286 env[1215]: time="2025-09-09T00:46:46.386242134Z" level=info msg="CreateContainer within sandbox \"6e0a2a9bbdbcbd9b4fc8b0987fc5715d698de351a3a38fbf9ba5642281647f6d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:46:46.388899 env[1215]: time="2025-09-09T00:46:46.388856809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf8d90242ac6cc12eb38b0ff861a54245b10c37c9cb05bc96ac9cb3d91f71d94\"" Sep 9 00:46:46.389635 kubelet[1579]: E0909 00:46:46.389441 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:46.390815 env[1215]: time="2025-09-09T00:46:46.390774307Z" level=info msg="CreateContainer within sandbox \"cf8d90242ac6cc12eb38b0ff861a54245b10c37c9cb05bc96ac9cb3d91f71d94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:46:46.395321 env[1215]: time="2025-09-09T00:46:46.393935572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f526379a0ec1f5f101d05961c403bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7db7e3a3c8f7551d52fe70f9e229ffdac4eb86ac6eef9e7214c6326aaf8dd5c7\"" Sep 9 00:46:46.395734 kubelet[1579]: E0909 00:46:46.395581 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:46.398247 env[1215]: time="2025-09-09T00:46:46.398216539Z" level=info msg="CreateContainer within sandbox \"7db7e3a3c8f7551d52fe70f9e229ffdac4eb86ac6eef9e7214c6326aaf8dd5c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:46:46.401460 env[1215]: time="2025-09-09T00:46:46.401405285Z" level=info msg="CreateContainer within sandbox \"6e0a2a9bbdbcbd9b4fc8b0987fc5715d698de351a3a38fbf9ba5642281647f6d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4877674c31e628cc432d3e59aed420c9373b8baea9999ea899ba0758dad73aee\"" Sep 9 00:46:46.402120 env[1215]: time="2025-09-09T00:46:46.402092436Z" level=info msg="StartContainer for \"4877674c31e628cc432d3e59aed420c9373b8baea9999ea899ba0758dad73aee\"" Sep 9 00:46:46.404533 env[1215]: time="2025-09-09T00:46:46.404497247Z" level=info msg="CreateContainer within sandbox \"cf8d90242ac6cc12eb38b0ff861a54245b10c37c9cb05bc96ac9cb3d91f71d94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa863adde08410b66a573faba3142d311addd97c64b31b77a783217bb510ede1\"" Sep 9 00:46:46.405063 env[1215]: time="2025-09-09T00:46:46.405027340Z" level=info msg="StartContainer for \"fa863adde08410b66a573faba3142d311addd97c64b31b77a783217bb510ede1\"" Sep 9 00:46:46.410787 env[1215]: time="2025-09-09T00:46:46.410747239Z" level=info 
msg="CreateContainer within sandbox \"7db7e3a3c8f7551d52fe70f9e229ffdac4eb86ac6eef9e7214c6326aaf8dd5c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a8b2b455defed130bcb75a70c8e172391852745ffa0216a4d41e71af7591aec2\"" Sep 9 00:46:46.411213 env[1215]: time="2025-09-09T00:46:46.411179031Z" level=info msg="StartContainer for \"a8b2b455defed130bcb75a70c8e172391852745ffa0216a4d41e71af7591aec2\"" Sep 9 00:46:46.421906 systemd[1]: Started cri-containerd-4877674c31e628cc432d3e59aed420c9373b8baea9999ea899ba0758dad73aee.scope. Sep 9 00:46:46.429188 systemd[1]: Started cri-containerd-a8b2b455defed130bcb75a70c8e172391852745ffa0216a4d41e71af7591aec2.scope. Sep 9 00:46:46.430032 systemd[1]: Started cri-containerd-fa863adde08410b66a573faba3142d311addd97c64b31b77a783217bb510ede1.scope. Sep 9 00:46:46.449371 kubelet[1579]: W0909 00:46:46.449271 1579 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Sep 9 00:46:46.449371 kubelet[1579]: E0909 00:46:46.449332 1579 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:46:46.470017 env[1215]: time="2025-09-09T00:46:46.469972296Z" level=info msg="StartContainer for \"4877674c31e628cc432d3e59aed420c9373b8baea9999ea899ba0758dad73aee\" returns successfully" Sep 9 00:46:46.476698 env[1215]: time="2025-09-09T00:46:46.476330255Z" level=info msg="StartContainer for \"fa863adde08410b66a573faba3142d311addd97c64b31b77a783217bb510ede1\" returns successfully" Sep 9 00:46:46.494792 env[1215]: time="2025-09-09T00:46:46.494750536Z" level=info msg="StartContainer for \"a8b2b455defed130bcb75a70c8e172391852745ffa0216a4d41e71af7591aec2\" returns successfully" Sep 9 00:46:46.875312 kubelet[1579]: I0909 00:46:46.875276 1579 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:46:47.357424 kubelet[1579]: E0909 00:46:47.357380 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:47.357566 kubelet[1579]: E0909 00:46:47.357531 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:47.359161 kubelet[1579]: E0909 00:46:47.359133 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:47.359260 kubelet[1579]: E0909 00:46:47.359233 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:47.360712 kubelet[1579]: E0909 00:46:47.360686 1579 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:46:47.360784 kubelet[1579]: E0909 00:46:47.360776 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:48.045916 kubelet[1579]: E0909 00:46:48.045875 1579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:46:48.101506 kubelet[1579]: I0909 00:46:48.101459 1579 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:46:48.139252 kubelet[1579]: I0909 00:46:48.139213 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:48.144306 kubelet[1579]: E0909 00:46:48.144275 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:48.144306 kubelet[1579]: I0909 00:46:48.144304 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:48.146696 kubelet[1579]: E0909 00:46:48.146668 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:48.146696 kubelet[1579]: I0909 00:46:48.146691 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:48.148234 kubelet[1579]: E0909 00:46:48.148208 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:48.215377 kubelet[1579]: I0909 00:46:48.215349 1579 apiserver.go:52] "Watching apiserver" Sep 9 00:46:48.239105 kubelet[1579]: I0909 00:46:48.239073 1579 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:46:48.361689 kubelet[1579]: I0909 00:46:48.361581 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:48.361795 kubelet[1579]: I0909 00:46:48.361710 1579 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:48.363875 kubelet[1579]: E0909 00:46:48.363849 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:48.364012 kubelet[1579]: E0909 00:46:48.363996 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:48.364966 kubelet[1579]: E0909 00:46:48.364945 1579 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:48.365080 kubelet[1579]: E0909 00:46:48.365065 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:49.819261 systemd[1]: Reloading. 
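The "no PriorityClass with name system-node-critical was found" rejections are transient: the API server seeds the built-in system-node-critical and system-cluster-critical classes shortly after it starts, and the kubelet then retries the mirror pods (by 00:46:51 below they fail only with "already exists"). Purely for illustration, a client-go sketch of creating such a class by hand; the admin kubeconfig path and the priority value are assumptions, not taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		Value:      2000001000, // assumed: the value the apiserver normally seeds
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("created system-node-critical")
}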
Sep 9 00:46:49.864237 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-09-09T00:46:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:46:49.864585 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-09-09T00:46:49Z" level=info msg="torcx already run" Sep 9 00:46:49.917686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:46:49.917710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:46:49.933260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:46:50.011798 systemd[1]: Stopping kubelet.service... Sep 9 00:46:50.036001 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:46:50.036175 systemd[1]: Stopped kubelet.service. Sep 9 00:46:50.036216 systemd[1]: kubelet.service: Consumed 1.191s CPU time. Sep 9 00:46:50.037664 systemd[1]: Starting kubelet.service... Sep 9 00:46:50.130393 systemd[1]: Started kubelet.service. Sep 9 00:46:50.164114 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:46:50.164114 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:46:50.164114 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:46:50.164114 kubelet[1916]: I0909 00:46:50.163364 1916 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:46:50.171303 kubelet[1916]: I0909 00:46:50.171266 1916 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:46:50.171303 kubelet[1916]: I0909 00:46:50.171294 1916 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:46:50.171541 kubelet[1916]: I0909 00:46:50.171527 1916 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:46:50.172672 kubelet[1916]: I0909 00:46:50.172657 1916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
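"Client rotation is on" together with the "Loading cert/key pair" line above means this kubelet restart reuses the bootstrap-issued client credential; kubelet-client-current.pem holds certificate and key together in one file. A stdlib sketch that parses the certificate block of that file and prints its lifetime:

// Sketch: inspect the rotated kubelet client certificate (path from the log).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file contains both CERTIFICATE and key blocks; parse only the former.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}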
Sep 9 00:46:50.174781 kubelet[1916]: I0909 00:46:50.174762 1916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:46:50.177847 kubelet[1916]: E0909 00:46:50.177824 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:46:50.177951 kubelet[1916]: I0909 00:46:50.177938 1916 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:46:50.180663 kubelet[1916]: I0909 00:46:50.180638 1916 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:46:50.180987 kubelet[1916]: I0909 00:46:50.180960 1916 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:46:50.181373 kubelet[1916]: I0909 00:46:50.181096 1916 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:46:50.181543 kubelet[1916]: I0909 00:46:50.181529 1916 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:46:50.181613 kubelet[1916]: I0909 00:46:50.181604 1916 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:46:50.181719 kubelet[1916]: I0909 00:46:50.181708 1916 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:46:50.182095 kubelet[1916]: I0909 00:46:50.182076 1916 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:46:50.182143 kubelet[1916]: I0909 00:46:50.182102 1916 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:46:50.182143 kubelet[1916]: I0909 00:46:50.182120 1916 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:46:50.182143 kubelet[1916]: I0909 00:46:50.182129 1916 apiserver.go:42] "Waiting for node sync before watching apiserver 
pods" Sep 9 00:46:50.182779 kubelet[1916]: I0909 00:46:50.182746 1916 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:46:50.183432 kubelet[1916]: I0909 00:46:50.183407 1916 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:46:50.184066 kubelet[1916]: I0909 00:46:50.184041 1916 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:46:50.184189 kubelet[1916]: I0909 00:46:50.184177 1916 server.go:1287] "Started kubelet" Sep 9 00:46:50.185129 kubelet[1916]: I0909 00:46:50.185030 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:46:50.185366 kubelet[1916]: I0909 00:46:50.185329 1916 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:46:50.185442 kubelet[1916]: I0909 00:46:50.185387 1916 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:46:50.186218 kubelet[1916]: I0909 00:46:50.186195 1916 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:46:50.186851 kubelet[1916]: I0909 00:46:50.186835 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:46:50.190363 kubelet[1916]: E0909 00:46:50.190338 1916 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:46:50.190648 kubelet[1916]: E0909 00:46:50.190631 1916 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:46:50.190762 kubelet[1916]: I0909 00:46:50.190749 1916 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:46:50.190975 kubelet[1916]: I0909 00:46:50.190956 1916 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:46:50.191152 kubelet[1916]: I0909 00:46:50.191138 1916 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:46:50.194687 kubelet[1916]: I0909 00:46:50.194655 1916 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:46:50.194687 kubelet[1916]: I0909 00:46:50.194682 1916 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:46:50.194791 kubelet[1916]: I0909 00:46:50.194769 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:46:50.201251 kubelet[1916]: I0909 00:46:50.201195 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:46:50.218929 kubelet[1916]: I0909 00:46:50.218885 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:46:50.220269 kubelet[1916]: I0909 00:46:50.220244 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:46:50.220269 kubelet[1916]: I0909 00:46:50.220273 1916 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:46:50.220420 kubelet[1916]: I0909 00:46:50.220295 1916 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:46:50.220420 kubelet[1916]: I0909 00:46:50.220303 1916 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:46:50.220420 kubelet[1916]: E0909 00:46:50.220346 1916 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:46:50.246021 kubelet[1916]: I0909 00:46:50.245936 1916 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:46:50.246177 kubelet[1916]: I0909 00:46:50.246160 1916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:46:50.246246 kubelet[1916]: I0909 00:46:50.246236 1916 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:46:50.246440 kubelet[1916]: I0909 00:46:50.246411 1916 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:46:50.246534 kubelet[1916]: I0909 00:46:50.246509 1916 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:46:50.246587 kubelet[1916]: I0909 00:46:50.246577 1916 policy_none.go:49] "None policy: Start" Sep 9 00:46:50.246652 kubelet[1916]: I0909 00:46:50.246641 1916 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:46:50.246711 kubelet[1916]: I0909 00:46:50.246702 1916 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:46:50.246873 kubelet[1916]: I0909 00:46:50.246858 1916 state_mem.go:75] "Updated machine memory state" Sep 9 00:46:50.250172 kubelet[1916]: I0909 00:46:50.250151 1916 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:46:50.250428 kubelet[1916]: I0909 00:46:50.250398 1916 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:46:50.250537 kubelet[1916]: I0909 00:46:50.250506 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:46:50.250838 kubelet[1916]: I0909 00:46:50.250820 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:46:50.251811 kubelet[1916]: E0909 00:46:50.251780 1916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:46:50.320925 kubelet[1916]: I0909 00:46:50.320879 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:50.321119 kubelet[1916]: I0909 00:46:50.321103 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:50.321295 kubelet[1916]: I0909 00:46:50.321261 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.353827 kubelet[1916]: I0909 00:46:50.353803 1916 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:46:50.359445 kubelet[1916]: I0909 00:46:50.359419 1916 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:46:50.359593 kubelet[1916]: I0909 00:46:50.359580 1916 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:46:50.392983 kubelet[1916]: I0909 00:46:50.392953 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.393086 kubelet[1916]: I0909 00:46:50.392996 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.393086 kubelet[1916]: I0909 00:46:50.393031 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.393086 kubelet[1916]: I0909 00:46:50.393061 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:50.393167 kubelet[1916]: I0909 00:46:50.393089 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:50.393167 kubelet[1916]: I0909 00:46:50.393116 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:50.393167 kubelet[1916]: I0909 00:46:50.393144 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.393232 kubelet[1916]: I0909 00:46:50.393169 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f526379a0ec1f5f101d05961c403bd4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f526379a0ec1f5f101d05961c403bd4\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:46:50.393232 kubelet[1916]: I0909 00:46:50.393190 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:46:50.625820 kubelet[1916]: E0909 00:46:50.625785 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:50.626809 kubelet[1916]: E0909 00:46:50.626785 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:50.626888 kubelet[1916]: E0909 00:46:50.626835 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:50.815900 sudo[1951]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:46:50.816551 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 9 00:46:51.182520 kubelet[1916]: I0909 00:46:51.182485 1916 apiserver.go:52] "Watching apiserver" Sep 9 00:46:51.191144 kubelet[1916]: I0909 00:46:51.191119 1916 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:46:51.233738 kubelet[1916]: I0909 00:46:51.233702 1916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:51.233892 kubelet[1916]: E0909 00:46:51.233869 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:51.233934 kubelet[1916]: E0909 00:46:51.233920 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:51.239428 kubelet[1916]: E0909 00:46:51.238991 1916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:46:51.239428 kubelet[1916]: E0909 00:46:51.239128 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:51.255160 kubelet[1916]: I0909 00:46:51.255104 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.255090108 podStartE2EDuration="1.255090108s" 
podCreationTimestamp="2025-09-09 00:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:46:51.254101422 +0000 UTC m=+1.120089044" watchObservedRunningTime="2025-09-09 00:46:51.255090108 +0000 UTC m=+1.121077730" Sep 9 00:46:51.269460 kubelet[1916]: I0909 00:46:51.269402 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.26938822 podStartE2EDuration="1.26938822s" podCreationTimestamp="2025-09-09 00:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:46:51.262333236 +0000 UTC m=+1.128320858" watchObservedRunningTime="2025-09-09 00:46:51.26938822 +0000 UTC m=+1.135375842" Sep 9 00:46:51.270342 sudo[1951]: pam_unix(sudo:session): session closed for user root Sep 9 00:46:51.277131 kubelet[1916]: I0909 00:46:51.277058 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.277042091 podStartE2EDuration="1.277042091s" podCreationTimestamp="2025-09-09 00:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:46:51.269614537 +0000 UTC m=+1.135602159" watchObservedRunningTime="2025-09-09 00:46:51.277042091 +0000 UTC m=+1.143029673" Sep 9 00:46:52.235433 kubelet[1916]: E0909 00:46:52.235395 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:52.235769 kubelet[1916]: E0909 00:46:52.235495 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:53.219283 sudo[1313]: pam_unix(sudo:session): session closed for user root Sep 9 00:46:53.220645 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 9 00:46:53.224354 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:34288.service: Deactivated successfully. Sep 9 00:46:53.225039 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:46:53.225190 systemd[1]: session-5.scope: Consumed 8.981s CPU time. Sep 9 00:46:53.226220 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:46:53.227178 systemd-logind[1202]: Removed session 5. Sep 9 00:46:54.185157 kubelet[1916]: E0909 00:46:54.185119 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:54.960248 kubelet[1916]: I0909 00:46:54.960203 1916 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:46:54.960595 env[1215]: time="2025-09-09T00:46:54.960556501Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:46:54.960899 kubelet[1916]: I0909 00:46:54.960772 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:46:55.558402 systemd[1]: Created slice kubepods-besteffort-poda95d9589_ddb9_428b_bb20_1464a8e9613f.slice. 
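[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" entries above come from the kubelet capping resolv.conf at three nameservers; the host's resolv.conf evidently lists more than three, and "1.1.1.1 1.0.0.1 8.8.8.8" is the truncated line that gets applied. A minimal sketch of that capping behavior, assuming a toy parser rather than the kubelet's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the classic three-resolver limit the kubelet enforces.
    const maxNameservers = 3

    // capNameservers keeps the first three nameserver entries and reports
    // whether any were dropped, roughly matching the warning in the log.
    func capNameservers(path string) ([]string, bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, false, err
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            return nil, false, err
        }
        if len(servers) > maxNameservers {
            return servers[:maxNameservers], true, nil
        }
        return servers, false, nil
    }

    func main() {
        servers, truncated, err := capNameservers("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if truncated {
            fmt.Printf("Nameserver limits exceeded, applied: %s\n", strings.Join(servers, " "))
        }
    }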
Sep 9 00:46:55.558777 kubelet[1916]: W0909 00:46:55.558727 1916 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 00:46:55.558777 kubelet[1916]: E0909 00:46:55.558769 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:46:55.559001 kubelet[1916]: W0909 00:46:55.558806 1916 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 00:46:55.559001 kubelet[1916]: W0909 00:46:55.558806 1916 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 9 00:46:55.559001 kubelet[1916]: E0909 00:46:55.558818 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:46:55.559001 kubelet[1916]: E0909 00:46:55.558853 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 9 00:46:55.575099 systemd[1]: Created slice kubepods-burstable-podf98d78af_cf33_40b4_b03a_395e37701d34.slice. 
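[Annotation] The reflector.go failures above ("no relationship found between node 'localhost' and this object") are the node authorizer at work: the kubelet may only read secrets and configmaps referenced by pods already bound to its node, and at this instant the cilium pod referencing hubble-server-certs, cilium-clustermesh, and cilium-config has just been created. The errors clear on their own once the binding is visible. A much-simplified sketch of that relationship check, with hypothetical types (the real authorizer walks a graph of API objects):

    package main

    import "fmt"

    // pod is a hypothetical, stripped-down view of a scheduled pod.
    type pod struct {
        nodeName string
        secrets  []string
    }

    // nodeCanGetSecret grants access only when some pod bound to the node
    // references the secret — the "relationship" the log says is missing.
    func nodeCanGetSecret(node, secret string, pods []pod) bool {
        for _, p := range pods {
            if p.nodeName != node {
                continue
            }
            for _, s := range p.secrets {
                if s == secret {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        pods := []pod{} // cilium-qmhbw not yet visibly bound to the node
        fmt.Println(nodeCanGetSecret("localhost", "hubble-server-certs", pods)) // false -> Forbidden
    }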
Sep 9 00:46:55.629560 kubelet[1916]: I0909 00:46:55.629522 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zm8t\" (UniqueName: \"kubernetes.io/projected/a95d9589-ddb9-428b-bb20-1464a8e9613f-kube-api-access-8zm8t\") pod \"kube-proxy-lg27v\" (UID: \"a95d9589-ddb9-428b-bb20-1464a8e9613f\") " pod="kube-system/kube-proxy-lg27v" Sep 9 00:46:55.629789 kubelet[1916]: I0909 00:46:55.629770 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.629885 kubelet[1916]: I0909 00:46:55.629872 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a95d9589-ddb9-428b-bb20-1464a8e9613f-xtables-lock\") pod \"kube-proxy-lg27v\" (UID: \"a95d9589-ddb9-428b-bb20-1464a8e9613f\") " pod="kube-system/kube-proxy-lg27v" Sep 9 00:46:55.629989 kubelet[1916]: I0909 00:46:55.629974 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-net\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630073 kubelet[1916]: I0909 00:46:55.630056 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-kernel\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630185 kubelet[1916]: I0909 00:46:55.630166 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a95d9589-ddb9-428b-bb20-1464a8e9613f-lib-modules\") pod \"kube-proxy-lg27v\" (UID: \"a95d9589-ddb9-428b-bb20-1464a8e9613f\") " pod="kube-system/kube-proxy-lg27v" Sep 9 00:46:55.630280 kubelet[1916]: I0909 00:46:55.630266 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-hostproc\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630396 kubelet[1916]: I0909 00:46:55.630381 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-cgroup\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630487 kubelet[1916]: I0909 00:46:55.630461 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630581 kubelet[1916]: I0909 00:46:55.630566 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-bpf-maps\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630689 kubelet[1916]: I0909 00:46:55.630676 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a95d9589-ddb9-428b-bb20-1464a8e9613f-kube-proxy\") pod \"kube-proxy-lg27v\" (UID: \"a95d9589-ddb9-428b-bb20-1464a8e9613f\") " pod="kube-system/kube-proxy-lg27v" Sep 9 00:46:55.630793 kubelet[1916]: I0909 00:46:55.630780 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f98d78af-cf33-40b4-b03a-395e37701d34-clustermesh-secrets\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.630907 kubelet[1916]: I0909 00:46:55.630885 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-xtables-lock\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.631019 kubelet[1916]: I0909 00:46:55.631003 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-lib-modules\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.631098 kubelet[1916]: I0909 00:46:55.631083 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8762\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-kube-api-access-h8762\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.631202 kubelet[1916]: I0909 00:46:55.631188 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-run\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.631296 kubelet[1916]: I0909 00:46:55.631283 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cni-path\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.631404 kubelet[1916]: I0909 00:46:55.631381 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-etc-cni-netd\") pod \"cilium-qmhbw\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") " pod="kube-system/cilium-qmhbw" Sep 9 00:46:55.740349 kubelet[1916]: I0909 00:46:55.740309 1916 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:46:55.873568 kubelet[1916]: E0909 00:46:55.872772 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:55.873724 env[1215]: time="2025-09-09T00:46:55.873491381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lg27v,Uid:a95d9589-ddb9-428b-bb20-1464a8e9613f,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:55.940265 env[1215]: time="2025-09-09T00:46:55.940120036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:55.940265 env[1215]: time="2025-09-09T00:46:55.940157366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:55.940265 env[1215]: time="2025-09-09T00:46:55.940172530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:55.940463 env[1215]: time="2025-09-09T00:46:55.940339175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd pid=2008 runtime=io.containerd.runc.v2 Sep 9 00:46:55.956115 systemd[1]: Started cri-containerd-c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd.scope. Sep 9 00:46:55.999811 systemd[1]: Created slice kubepods-besteffort-pod27a4f7dd_fab8_407e_b8f3_4e63e7322601.slice. Sep 9 00:46:56.014505 env[1215]: time="2025-09-09T00:46:56.014448316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lg27v,Uid:a95d9589-ddb9-428b-bb20-1464a8e9613f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd\"" Sep 9 00:46:56.016926 kubelet[1916]: E0909 00:46:56.016893 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:56.019423 env[1215]: time="2025-09-09T00:46:56.019342738Z" level=info msg="CreateContainer within sandbox \"c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:46:56.034231 kubelet[1916]: I0909 00:46:56.034191 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a4f7dd-fab8-407e-b8f3-4e63e7322601-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qtwmj\" (UID: \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\") " pod="kube-system/cilium-operator-6c4d7847fc-qtwmj" Sep 9 00:46:56.034364 kubelet[1916]: I0909 00:46:56.034243 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dtm\" (UniqueName: \"kubernetes.io/projected/27a4f7dd-fab8-407e-b8f3-4e63e7322601-kube-api-access-q4dtm\") pod \"cilium-operator-6c4d7847fc-qtwmj\" (UID: \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\") " pod="kube-system/cilium-operator-6c4d7847fc-qtwmj" Sep 9 00:46:56.038078 env[1215]: time="2025-09-09T00:46:56.038025396Z" level=info msg="CreateContainer within sandbox \"c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a04c91c91014b4f7335661669e3b3e53328b02544126e0e95048dbba42702310\"" Sep 9 00:46:56.038736 env[1215]: time="2025-09-09T00:46:56.038708692Z" level=info msg="StartContainer for \"a04c91c91014b4f7335661669e3b3e53328b02544126e0e95048dbba42702310\"" Sep 9 00:46:56.052652 systemd[1]: Started cri-containerd-a04c91c91014b4f7335661669e3b3e53328b02544126e0e95048dbba42702310.scope. Sep 9 00:46:56.084688 env[1215]: time="2025-09-09T00:46:56.084643776Z" level=info msg="StartContainer for \"a04c91c91014b4f7335661669e3b3e53328b02544126e0e95048dbba42702310\" returns successfully" Sep 9 00:46:56.245314 kubelet[1916]: E0909 00:46:56.245281 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:56.733313 kubelet[1916]: E0909 00:46:56.733194 1916 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Sep 9 00:46:56.733313 kubelet[1916]: E0909 00:46:56.733220 1916 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 9 00:46:56.733313 kubelet[1916]: E0909 00:46:56.733241 1916 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qmhbw: failed to sync secret cache: timed out waiting for the condition Sep 9 00:46:56.733313 kubelet[1916]: E0909 00:46:56.733280 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path podName:f98d78af-cf33-40b4-b03a-395e37701d34 nodeName:}" failed. No retries permitted until 2025-09-09 00:46:57.23325898 +0000 UTC m=+7.099246602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path") pod "cilium-qmhbw" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34") : failed to sync configmap cache: timed out waiting for the condition Sep 9 00:46:56.733313 kubelet[1916]: E0909 00:46:56.733299 1916 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls podName:f98d78af-cf33-40b4-b03a-395e37701d34 nodeName:}" failed. No retries permitted until 2025-09-09 00:46:57.233291349 +0000 UTC m=+7.099278971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls") pod "cilium-qmhbw" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34") : failed to sync secret cache: timed out waiting for the condition Sep 9 00:46:56.747872 systemd[1]: run-containerd-runc-k8s.io-c1d02d49f2e92cce5a01c6ea8d882d4b8855e2ff53f5924ed5b64c4fa82e75dd-runc.QaBr75.mount: Deactivated successfully. 
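[Annotation] The nestedpendingoperations.go entries above show the volume manager's retry discipline: each failed MountVolume.SetUp is rescheduled no sooner than durationBeforeRetry (500ms here), and the failures resolve once the configmap/secret informer caches sync. A rough sketch of that retry shape, with hypothetical names standing in for kubelet internals:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errCacheNotSynced = errors.New("failed to sync configmap cache: timed out waiting for the condition")

    // setUpVolume stands in for MountVolume.SetUp; it fails until the
    // informer cache has synced (a stand-in, not kubelet code).
    func setUpVolume(cacheSynced func() bool) error {
        if !cacheSynced() {
            return errCacheNotSynced
        }
        return nil
    }

    func main() {
        deadline := time.Now().Add(600 * time.Millisecond)
        cacheSynced := func() bool { return time.Now().After(deadline) }

        const durationBeforeRetry = 500 * time.Millisecond
        for {
            if err := setUpVolume(cacheSynced); err != nil {
                fmt.Printf("no retries permitted for %v: %v\n", durationBeforeRetry, err)
                time.Sleep(durationBeforeRetry)
                continue
            }
            fmt.Println("volume mounted")
            return
        }
    }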
Sep 9 00:46:57.214608 kubelet[1916]: E0909 00:46:57.212684 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:57.214743 env[1215]: time="2025-09-09T00:46:57.213324841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qtwmj,Uid:27a4f7dd-fab8-407e-b8f3-4e63e7322601,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:57.237543 env[1215]: time="2025-09-09T00:46:57.237292085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:57.237543 env[1215]: time="2025-09-09T00:46:57.237360022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:57.237543 env[1215]: time="2025-09-09T00:46:57.237387829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:57.237868 env[1215]: time="2025-09-09T00:46:57.237628167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4 pid=2215 runtime=io.containerd.runc.v2 Sep 9 00:46:57.263906 systemd[1]: Started cri-containerd-68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4.scope. Sep 9 00:46:57.300537 env[1215]: time="2025-09-09T00:46:57.300500539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qtwmj,Uid:27a4f7dd-fab8-407e-b8f3-4e63e7322601,Namespace:kube-system,Attempt:0,} returns sandbox id \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\"" Sep 9 00:46:57.301497 kubelet[1916]: E0909 00:46:57.301333 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:57.302502 env[1215]: time="2025-09-09T00:46:57.302453255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:46:57.380561 kubelet[1916]: E0909 00:46:57.380534 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:57.381455 env[1215]: time="2025-09-09T00:46:57.380969802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmhbw,Uid:f98d78af-cf33-40b4-b03a-395e37701d34,Namespace:kube-system,Attempt:0,}" Sep 9 00:46:57.387137 kubelet[1916]: E0909 00:46:57.387095 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:57.398788 env[1215]: time="2025-09-09T00:46:57.398704727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:46:57.398788 env[1215]: time="2025-09-09T00:46:57.398749578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:46:57.398788 env[1215]: time="2025-09-09T00:46:57.398759421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:46:57.399083 env[1215]: time="2025-09-09T00:46:57.399029446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340 pid=2255 runtime=io.containerd.runc.v2 Sep 9 00:46:57.404192 kubelet[1916]: I0909 00:46:57.404138 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lg27v" podStartSLOduration=2.404121128 podStartE2EDuration="2.404121128s" podCreationTimestamp="2025-09-09 00:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:46:56.254555948 +0000 UTC m=+6.120543570" watchObservedRunningTime="2025-09-09 00:46:57.404121128 +0000 UTC m=+7.270108750" Sep 9 00:46:57.409398 systemd[1]: Started cri-containerd-43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340.scope. Sep 9 00:46:57.433452 env[1215]: time="2025-09-09T00:46:57.433416992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qmhbw,Uid:f98d78af-cf33-40b4-b03a-395e37701d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\"" Sep 9 00:46:57.434277 kubelet[1916]: E0909 00:46:57.434237 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:58.256505 kubelet[1916]: E0909 00:46:58.256331 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:59.257390 kubelet[1916]: E0909 00:46:59.257355 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:46:59.441441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1979147514.mount: Deactivated successfully. Sep 9 00:47:00.575621 kubelet[1916]: E0909 00:47:00.575592 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:01.264994 kubelet[1916]: E0909 00:47:01.264944 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:02.266511 kubelet[1916]: E0909 00:47:02.266424 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:04.192301 kubelet[1916]: E0909 00:47:04.192266 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:05.018363 update_engine[1205]: I0909 00:47:05.017987 1205 update_attempter.cc:509] Updating boot flags... 
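[Annotation] In the pod_startup_latency_tracker entries above, firstStartedPulling and lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC" are Go zero-value timestamps: no image pull occurred during this start (the kube-proxy image was already on disk), so the pull window contributes nothing and podStartSLOduration equals podStartE2EDuration — here 00:46:57.404121128 (watchObservedRunningTime) minus 00:46:55 (podCreationTimestamp) = 2.404121128s.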
Sep 9 00:47:07.338292 env[1215]: time="2025-09-09T00:47:07.338243076Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:07.339557 env[1215]: time="2025-09-09T00:47:07.339531942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:07.341424 env[1215]: time="2025-09-09T00:47:07.341382208Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:07.341957 env[1215]: time="2025-09-09T00:47:07.341929007Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 00:47:07.343860 env[1215]: time="2025-09-09T00:47:07.343815439Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:47:07.345269 env[1215]: time="2025-09-09T00:47:07.345236603Z" level=info msg="CreateContainer within sandbox \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:47:07.359728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970946816.mount: Deactivated successfully. Sep 9 00:47:07.359993 env[1215]: time="2025-09-09T00:47:07.359724170Z" level=info msg="CreateContainer within sandbox \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\"" Sep 9 00:47:07.360600 env[1215]: time="2025-09-09T00:47:07.360548089Z" level=info msg="StartContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\"" Sep 9 00:47:07.375975 systemd[1]: Started cri-containerd-c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89.scope. Sep 9 00:47:07.403968 env[1215]: time="2025-09-09T00:47:07.403917337Z" level=info msg="StartContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" returns successfully" Sep 9 00:47:08.276650 kubelet[1916]: E0909 00:47:08.276506 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:09.277648 kubelet[1916]: E0909 00:47:09.277617 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:17.484749 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:54282.service. Sep 9 00:47:17.519005 sshd[2347]: Accepted publickey for core from 10.0.0.1 port 54282 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:17.520369 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:17.524046 systemd-logind[1202]: New session 6 of user core. 
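[Annotation] The operator image above is pulled by a digest-pinned reference (name:tag@sha256:...); note that PullImage returns a different sha256 — the digest in the reference pins the manifest to fetch, while the returned reference is the locally resolved image ID. A toy sketch of splitting such a reference (illustrative only, not the distribution reference grammar):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks repo[:tag][@digest] apart.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ':' after the last '/' separates the tag from the repository.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
        fmt.Println(repo) // quay.io/cilium/operator-generic
        fmt.Println(tag)  // v1.12.5
        fmt.Println(digest)
    }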
Sep 9 00:47:17.524519 systemd[1]: Started session-6.scope. Sep 9 00:47:17.642932 sshd[2347]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:17.645215 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:54282.service: Deactivated successfully. Sep 9 00:47:17.645972 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:47:17.646479 systemd-logind[1202]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:47:17.647224 systemd-logind[1202]: Removed session 6. Sep 9 00:47:22.647549 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:54346.service. Sep 9 00:47:22.684200 sshd[2362]: Accepted publickey for core from 10.0.0.1 port 54346 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:22.686526 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:22.690581 systemd-logind[1202]: New session 7 of user core. Sep 9 00:47:22.691424 systemd[1]: Started session-7.scope. Sep 9 00:47:22.820732 sshd[2362]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:22.824791 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:54346.service: Deactivated successfully. Sep 9 00:47:22.825599 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:47:22.826536 systemd-logind[1202]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:47:22.827368 systemd-logind[1202]: Removed session 7. Sep 9 00:47:25.720013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712254416.mount: Deactivated successfully. Sep 9 00:47:27.825619 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:54362.service. Sep 9 00:47:27.860406 sshd[2378]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:27.862029 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:27.866055 systemd-logind[1202]: New session 8 of user core. Sep 9 00:47:27.866419 systemd[1]: Started session-8.scope. Sep 9 00:47:27.990427 sshd[2378]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:27.993081 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:54362.service: Deactivated successfully. Sep 9 00:47:27.993835 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:47:27.994348 systemd-logind[1202]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:47:27.995011 systemd-logind[1202]: Removed session 8. Sep 9 00:47:32.994909 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:48946.service. Sep 9 00:47:33.029423 sshd[2393]: Accepted publickey for core from 10.0.0.1 port 48946 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:33.030624 sshd[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:33.034058 systemd-logind[1202]: New session 9 of user core. Sep 9 00:47:33.034966 systemd[1]: Started session-9.scope. Sep 9 00:47:33.147524 sshd[2393]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:33.149951 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:48946.service: Deactivated successfully. Sep 9 00:47:33.150694 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:47:33.151207 systemd-logind[1202]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:47:33.152101 systemd-logind[1202]: Removed session 9. Sep 9 00:47:38.152386 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:48962.service. 
Sep 9 00:47:38.186295 sshd[2413]: Accepted publickey for core from 10.0.0.1 port 48962 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:38.187963 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:38.191913 systemd-logind[1202]: New session 10 of user core. Sep 9 00:47:38.192788 systemd[1]: Started session-10.scope. Sep 9 00:47:38.325967 sshd[2413]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:38.329066 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:48962.service: Deactivated successfully. Sep 9 00:47:38.329783 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:47:38.330366 systemd-logind[1202]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:47:38.330987 systemd-logind[1202]: Removed session 10. Sep 9 00:47:43.330381 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:38894.service. Sep 9 00:47:43.364759 sshd[2427]: Accepted publickey for core from 10.0.0.1 port 38894 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:43.366547 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:43.392238 systemd-logind[1202]: New session 11 of user core. Sep 9 00:47:43.393142 systemd[1]: Started session-11.scope. Sep 9 00:47:43.504267 sshd[2427]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:43.506870 systemd-logind[1202]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:47:43.507116 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:38894.service: Deactivated successfully. Sep 9 00:47:43.507824 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:47:43.508405 systemd-logind[1202]: Removed session 11. Sep 9 00:47:47.991224 env[1215]: time="2025-09-09T00:47:47.991176877Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:47.992988 env[1215]: time="2025-09-09T00:47:47.992949352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:47.994427 env[1215]: time="2025-09-09T00:47:47.994396813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:47.995750 env[1215]: time="2025-09-09T00:47:47.995717669Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 00:47:47.997773 env[1215]: time="2025-09-09T00:47:47.997740435Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:47:48.006095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392793248.mount: Deactivated successfully. Sep 9 00:47:48.009766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693394898.mount: Deactivated successfully. 
Sep 9 00:47:48.013046 env[1215]: time="2025-09-09T00:47:48.013006194Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\"" Sep 9 00:47:48.013456 env[1215]: time="2025-09-09T00:47:48.013430932Z" level=info msg="StartContainer for \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\"" Sep 9 00:47:48.037751 systemd[1]: Started cri-containerd-0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f.scope. Sep 9 00:47:48.078236 systemd[1]: cri-containerd-0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f.scope: Deactivated successfully. Sep 9 00:47:48.124073 env[1215]: time="2025-09-09T00:47:48.124016352Z" level=info msg="StartContainer for \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\" returns successfully" Sep 9 00:47:48.186512 env[1215]: time="2025-09-09T00:47:48.186452281Z" level=info msg="shim disconnected" id=0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f Sep 9 00:47:48.186512 env[1215]: time="2025-09-09T00:47:48.186511404Z" level=warning msg="cleaning up after shim disconnected" id=0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f namespace=k8s.io Sep 9 00:47:48.186512 env[1215]: time="2025-09-09T00:47:48.186520444Z" level=info msg="cleaning up dead shim" Sep 9 00:47:48.193452 env[1215]: time="2025-09-09T00:47:48.193415532Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n" Sep 9 00:47:48.336374 kubelet[1916]: E0909 00:47:48.335917 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:48.337886 env[1215]: time="2025-09-09T00:47:48.337849967Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:47:48.348936 env[1215]: time="2025-09-09T00:47:48.348883708Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\"" Sep 9 00:47:48.350492 env[1215]: time="2025-09-09T00:47:48.350442893Z" level=info msg="StartContainer for \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\"" Sep 9 00:47:48.355338 kubelet[1916]: I0909 00:47:48.355273 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qtwmj" podStartSLOduration=43.314512193 podStartE2EDuration="53.355258654s" podCreationTimestamp="2025-09-09 00:46:55 +0000 UTC" firstStartedPulling="2025-09-09 00:46:57.301988542 +0000 UTC m=+7.167976164" lastFinishedPulling="2025-09-09 00:47:07.342735003 +0000 UTC m=+17.208722625" observedRunningTime="2025-09-09 00:47:08.286188838 +0000 UTC m=+18.152176460" watchObservedRunningTime="2025-09-09 00:47:48.355258654 +0000 UTC m=+58.221246276" Sep 9 00:47:48.366460 systemd[1]: Started cri-containerd-a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba.scope. 
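[Annotation] The cilium-operator startup entry above decomposes exactly: end-to-end, watchObservedRunningTime − podCreationTimestamp = 00:47:48.355258654 − 00:46:55 = 53.355258654s (the logged podStartE2EDuration); the image-pull window is lastFinishedPulling − firstStartedPulling = 00:47:07.342735003 − 00:46:57.301988542 = 10.040746461s; and podStartSLOduration = 53.355258654 − 10.040746461 = 43.314512193s, i.e. startup latency with pull time excluded.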
Sep 9 00:47:48.396283 env[1215]: time="2025-09-09T00:47:48.394878229Z" level=info msg="StartContainer for \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\" returns successfully" Sep 9 00:47:48.405262 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:47:48.405486 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:47:48.405652 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:47:48.407062 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:47:48.408061 systemd[1]: cri-containerd-a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba.scope: Deactivated successfully. Sep 9 00:47:48.421478 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:47:48.432279 env[1215]: time="2025-09-09T00:47:48.432235750Z" level=info msg="shim disconnected" id=a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba Sep 9 00:47:48.432510 env[1215]: time="2025-09-09T00:47:48.432489441Z" level=warning msg="cleaning up after shim disconnected" id=a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba namespace=k8s.io Sep 9 00:47:48.432579 env[1215]: time="2025-09-09T00:47:48.432565204Z" level=info msg="cleaning up dead shim" Sep 9 00:47:48.439690 env[1215]: time="2025-09-09T00:47:48.439652820Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" Sep 9 00:47:48.508946 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:38910.service. Sep 9 00:47:48.543777 sshd[2573]: Accepted publickey for core from 10.0.0.1 port 38910 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:48.545049 sshd[2573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:48.548053 systemd-logind[1202]: New session 12 of user core. Sep 9 00:47:48.548889 systemd[1]: Started session-12.scope. Sep 9 00:47:48.656301 sshd[2573]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:48.658618 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:38910.service: Deactivated successfully. Sep 9 00:47:48.659289 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:47:48.659830 systemd-logind[1202]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:47:48.660571 systemd-logind[1202]: Removed session 12. Sep 9 00:47:49.004378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f-rootfs.mount: Deactivated successfully. 
Sep 9 00:47:49.338569 kubelet[1916]: E0909 00:47:49.338334 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:49.341507 env[1215]: time="2025-09-09T00:47:49.341393250Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:47:49.364083 env[1215]: time="2025-09-09T00:47:49.364040425Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\"" Sep 9 00:47:49.365846 env[1215]: time="2025-09-09T00:47:49.364630849Z" level=info msg="StartContainer for \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\"" Sep 9 00:47:49.384969 systemd[1]: Started cri-containerd-9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c.scope. Sep 9 00:47:49.413743 env[1215]: time="2025-09-09T00:47:49.413671514Z" level=info msg="StartContainer for \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\" returns successfully" Sep 9 00:47:49.418206 systemd[1]: cri-containerd-9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c.scope: Deactivated successfully. Sep 9 00:47:49.438659 env[1215]: time="2025-09-09T00:47:49.438604704Z" level=info msg="shim disconnected" id=9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c Sep 9 00:47:49.438659 env[1215]: time="2025-09-09T00:47:49.438653026Z" level=warning msg="cleaning up after shim disconnected" id=9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c namespace=k8s.io Sep 9 00:47:49.438659 env[1215]: time="2025-09-09T00:47:49.438663346Z" level=info msg="cleaning up dead shim" Sep 9 00:47:49.444823 env[1215]: time="2025-09-09T00:47:49.444788439Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" Sep 9 00:47:50.004192 systemd[1]: run-containerd-runc-k8s.io-9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c-runc.k4Zleh.mount: Deactivated successfully. Sep 9 00:47:50.004284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c-rootfs.mount: Deactivated successfully. 
Sep 9 00:47:50.341659 kubelet[1916]: E0909 00:47:50.341544 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:50.345059 env[1215]: time="2025-09-09T00:47:50.344970893Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:47:50.360767 env[1215]: time="2025-09-09T00:47:50.360727897Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\"" Sep 9 00:47:50.361564 env[1215]: time="2025-09-09T00:47:50.361534370Z" level=info msg="StartContainer for \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\"" Sep 9 00:47:50.377010 systemd[1]: Started cri-containerd-969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807.scope. Sep 9 00:47:50.409491 systemd[1]: cri-containerd-969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807.scope: Deactivated successfully. Sep 9 00:47:50.409741 env[1215]: time="2025-09-09T00:47:50.409640174Z" level=info msg="StartContainer for \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\" returns successfully" Sep 9 00:47:50.428690 env[1215]: time="2025-09-09T00:47:50.428643670Z" level=info msg="shim disconnected" id=969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807 Sep 9 00:47:50.428690 env[1215]: time="2025-09-09T00:47:50.428691832Z" level=warning msg="cleaning up after shim disconnected" id=969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807 namespace=k8s.io Sep 9 00:47:50.428891 env[1215]: time="2025-09-09T00:47:50.428700833Z" level=info msg="cleaning up dead shim" Sep 9 00:47:50.435202 env[1215]: time="2025-09-09T00:47:50.435169097Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2687 runtime=io.containerd.runc.v2\n" Sep 9 00:47:51.004209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807-rootfs.mount: Deactivated successfully. Sep 9 00:47:51.344979 kubelet[1916]: E0909 00:47:51.344885 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:51.355529 env[1215]: time="2025-09-09T00:47:51.355486967Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:47:51.369304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173945957.mount: Deactivated successfully. Sep 9 00:47:51.373792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372775272.mount: Deactivated successfully. 
Sep 9 00:47:51.377365 env[1215]: time="2025-09-09T00:47:51.377301168Z" level=info msg="CreateContainer within sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\"" Sep 9 00:47:51.377813 env[1215]: time="2025-09-09T00:47:51.377763987Z" level=info msg="StartContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\"" Sep 9 00:47:51.391129 systemd[1]: Started cri-containerd-0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f.scope. Sep 9 00:47:51.426566 env[1215]: time="2025-09-09T00:47:51.426524157Z" level=info msg="StartContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" returns successfully" Sep 9 00:47:51.528209 kubelet[1916]: I0909 00:47:51.527459 1916 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:47:51.563504 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:47:51.793494 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:47:52.349659 kubelet[1916]: E0909 00:47:52.349630 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:52.363489 kubelet[1916]: I0909 00:47:52.363406 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qmhbw" podStartSLOduration=6.803206881 podStartE2EDuration="57.363389587s" podCreationTimestamp="2025-09-09 00:46:55 +0000 UTC" firstStartedPulling="2025-09-09 00:46:57.436201351 +0000 UTC m=+7.302188933" lastFinishedPulling="2025-09-09 00:47:47.996384017 +0000 UTC m=+57.862371639" observedRunningTime="2025-09-09 00:47:52.363261502 +0000 UTC m=+62.229249124" watchObservedRunningTime="2025-09-09 00:47:52.363389587 +0000 UTC m=+62.229377209" Sep 9 00:47:53.351696 kubelet[1916]: E0909 00:47:53.351662 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:53.409711 systemd-networkd[1040]: cilium_host: Link UP Sep 9 00:47:53.409822 systemd-networkd[1040]: cilium_net: Link UP Sep 9 00:47:53.409824 systemd-networkd[1040]: cilium_net: Gained carrier Sep 9 00:47:53.410700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 9 00:47:53.410830 systemd-networkd[1040]: cilium_host: Gained carrier Sep 9 00:47:53.488039 systemd-networkd[1040]: cilium_net: Gained IPv6LL Sep 9 00:47:53.491854 systemd-networkd[1040]: cilium_vxlan: Link UP Sep 9 00:47:53.491860 systemd-networkd[1040]: cilium_vxlan: Gained carrier Sep 9 00:47:53.551590 systemd-networkd[1040]: cilium_host: Gained IPv6LL Sep 9 00:47:53.660770 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:59164.service. Sep 9 00:47:53.695994 sshd[2925]: Accepted publickey for core from 10.0.0.1 port 59164 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:53.697791 sshd[2925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:53.702210 systemd[1]: Started session-13.scope. Sep 9 00:47:53.702371 systemd-logind[1202]: New session 13 of user core. 
Sep 9 00:47:53.744525 kernel: NET: Registered PF_ALG protocol family Sep 9 00:47:53.816493 sshd[2925]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:53.820881 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:59172.service. Sep 9 00:47:53.821385 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:59164.service: Deactivated successfully. Sep 9 00:47:53.822171 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:47:53.823100 systemd-logind[1202]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:47:53.824166 systemd-logind[1202]: Removed session 13. Sep 9 00:47:53.853877 sshd[2961]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:53.855095 sshd[2961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:53.858859 systemd-logind[1202]: New session 14 of user core. Sep 9 00:47:53.859478 systemd[1]: Started session-14.scope. Sep 9 00:47:54.014959 sshd[2961]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:54.019871 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:59182.service. Sep 9 00:47:54.021042 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:59172.service: Deactivated successfully. Sep 9 00:47:54.022206 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:47:54.023027 systemd-logind[1202]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:47:54.029342 systemd-logind[1202]: Removed session 14. Sep 9 00:47:54.063858 sshd[3010]: Accepted publickey for core from 10.0.0.1 port 59182 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:54.065123 sshd[3010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:54.069208 systemd-logind[1202]: New session 15 of user core. Sep 9 00:47:54.069773 systemd[1]: Started session-15.scope. Sep 9 00:47:54.198869 sshd[3010]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:54.201458 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:59182.service: Deactivated successfully. Sep 9 00:47:54.202182 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:47:54.202846 systemd-logind[1202]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:47:54.203656 systemd-logind[1202]: Removed session 15. 
Sep 9 00:47:54.353121 kubelet[1916]: E0909 00:47:54.353024 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:54.374463 systemd-networkd[1040]: lxc_health: Link UP Sep 9 00:47:54.384488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:47:54.385880 systemd-networkd[1040]: lxc_health: Gained carrier Sep 9 00:47:54.855617 systemd-networkd[1040]: cilium_vxlan: Gained IPv6LL Sep 9 00:47:55.383263 kubelet[1916]: E0909 00:47:55.383227 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:56.263598 systemd-networkd[1040]: lxc_health: Gained IPv6LL Sep 9 00:47:56.356638 kubelet[1916]: E0909 00:47:56.356611 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:57.357920 kubelet[1916]: E0909 00:47:57.357885 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:59.203781 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:59184.service. Sep 9 00:47:59.242531 sshd[3243]: Accepted publickey for core from 10.0.0.1 port 59184 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:59.243720 sshd[3243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:59.246902 systemd-logind[1202]: New session 16 of user core. Sep 9 00:47:59.247755 systemd[1]: Started session-16.scope. Sep 9 00:47:59.359123 sshd[3243]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:59.363005 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:59192.service. Sep 9 00:47:59.363559 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:59184.service: Deactivated successfully. Sep 9 00:47:59.364182 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:47:59.364898 systemd-logind[1202]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:47:59.365758 systemd-logind[1202]: Removed session 16. Sep 9 00:47:59.395512 sshd[3256]: Accepted publickey for core from 10.0.0.1 port 59192 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:59.396933 sshd[3256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:59.399952 systemd-logind[1202]: New session 17 of user core. Sep 9 00:47:59.400748 systemd[1]: Started session-17.scope. Sep 9 00:47:59.576666 sshd[3256]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:59.580104 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:59198.service. Sep 9 00:47:59.580890 systemd-logind[1202]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:47:59.581007 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:59192.service: Deactivated successfully. Sep 9 00:47:59.581645 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:47:59.582398 systemd-logind[1202]: Removed session 17. Sep 9 00:47:59.614565 sshd[3268]: Accepted publickey for core from 10.0.0.1 port 59198 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:59.615656 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:59.619051 systemd-logind[1202]: New session 18 of user core. 
Sep 9 00:47:59.619644 systemd[1]: Started session-18.scope.
Sep 9 00:48:00.190524 sshd[3268]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:00.194024 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:44718.service.
Sep 9 00:48:00.194551 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:59198.service: Deactivated successfully.
Sep 9 00:48:00.195289 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:48:00.195890 systemd-logind[1202]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:48:00.196851 systemd-logind[1202]: Removed session 18.
Sep 9 00:48:00.232174 sshd[3285]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:00.233378 sshd[3285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:00.236719 systemd-logind[1202]: New session 19 of user core.
Sep 9 00:48:00.237528 systemd[1]: Started session-19.scope.
Sep 9 00:48:00.448773 sshd[3285]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:00.451428 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:44734.service.
Sep 9 00:48:00.455777 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:44718.service: Deactivated successfully.
Sep 9 00:48:00.456484 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:48:00.457377 systemd-logind[1202]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:48:00.458583 systemd-logind[1202]: Removed session 19.
Sep 9 00:48:00.483748 sshd[3299]: Accepted publickey for core from 10.0.0.1 port 44734 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:00.484946 sshd[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:00.487990 systemd-logind[1202]: New session 20 of user core.
Sep 9 00:48:00.488820 systemd[1]: Started session-20.scope.
Sep 9 00:48:00.600828 sshd[3299]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:00.603095 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:44734.service: Deactivated successfully.
Sep 9 00:48:00.603895 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:48:00.604426 systemd-logind[1202]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:48:00.605179 systemd-logind[1202]: Removed session 20.
Sep 9 00:48:05.605600 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:44750.service.
Sep 9 00:48:05.637825 sshd[3315]: Accepted publickey for core from 10.0.0.1 port 44750 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:05.639387 sshd[3315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:05.642664 systemd-logind[1202]: New session 21 of user core.
Sep 9 00:48:05.643517 systemd[1]: Started session-21.scope.
Sep 9 00:48:05.747771 sshd[3315]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:05.749878 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:48:05.750496 systemd-logind[1202]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:48:05.750664 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:44750.service: Deactivated successfully.
Sep 9 00:48:05.751638 systemd-logind[1202]: Removed session 21.
Sep 9 00:48:09.221117 kubelet[1916]: E0909 00:48:09.221079 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:10.752568 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:41406.service.
Sep 9 00:48:10.784997 sshd[3328]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:10.786587 sshd[3328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:10.790177 systemd-logind[1202]: New session 22 of user core.
Sep 9 00:48:10.790685 systemd[1]: Started session-22.scope.
Sep 9 00:48:10.897041 sshd[3328]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:10.899578 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:41406.service: Deactivated successfully.
Sep 9 00:48:10.900297 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:48:10.900888 systemd-logind[1202]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:48:10.901736 systemd-logind[1202]: Removed session 22.
Sep 9 00:48:15.901742 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:41422.service.
Sep 9 00:48:15.933680 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 41422 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:15.935189 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:15.938219 systemd-logind[1202]: New session 23 of user core.
Sep 9 00:48:15.939058 systemd[1]: Started session-23.scope.
Sep 9 00:48:16.055588 sshd[3341]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:16.059347 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:41438.service.
Sep 9 00:48:16.059938 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:41422.service: Deactivated successfully.
Sep 9 00:48:16.060607 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:48:16.061138 systemd-logind[1202]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:48:16.061923 systemd-logind[1202]: Removed session 23.
Sep 9 00:48:16.093152 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:16.094305 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:16.098078 systemd-logind[1202]: New session 24 of user core.
Sep 9 00:48:16.098893 systemd[1]: Started session-24.scope.
Sep 9 00:48:17.764197 env[1215]: time="2025-09-09T00:48:17.764132489Z" level=info msg="StopContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" with timeout 30 (s)"
Sep 9 00:48:17.764769 env[1215]: time="2025-09-09T00:48:17.764497812Z" level=info msg="Stop container \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" with signal terminated"
Sep 9 00:48:17.775940 systemd[1]: cri-containerd-c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89.scope: Deactivated successfully.
Sep 9 00:48:17.795992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89-rootfs.mount: Deactivated successfully.
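The StopContainer entries above show the runtime's two-phase stop: deliver the configured stop signal ("with signal terminated") and escalate to SIGKILL if the container outlives the timeout (30 s here, 2 s for the agent further down). A minimal sketch of the same pattern against an ordinary child process, since the real containerd task API is more involved; the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout mirrors the stop semantics in the log: polite SIGTERM
// first, SIGKILL once the grace period lapses.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // escalate, as the runtime does on timeout
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}

Here the operator container exits well inside its 30 s window, which is why its scope deactivates only ~11 ms after the stop request.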
Sep 9 00:48:17.803373 env[1215]: time="2025-09-09T00:48:17.803319848Z" level=info msg="shim disconnected" id=c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89
Sep 9 00:48:17.803373 env[1215]: time="2025-09-09T00:48:17.803373888Z" level=warning msg="cleaning up after shim disconnected" id=c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89 namespace=k8s.io
Sep 9 00:48:17.803570 env[1215]: time="2025-09-09T00:48:17.803384448Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.804457 env[1215]: time="2025-09-09T00:48:17.804402057Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:48:17.809545 env[1215]: time="2025-09-09T00:48:17.809503218Z" level=info msg="StopContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" with timeout 2 (s)"
Sep 9 00:48:17.809838 env[1215]: time="2025-09-09T00:48:17.809794021Z" level=info msg="Stop container \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" with signal terminated"
Sep 9 00:48:17.810990 env[1215]: time="2025-09-09T00:48:17.810957310Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3400 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.813199 env[1215]: time="2025-09-09T00:48:17.813164448Z" level=info msg="StopContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" returns successfully"
Sep 9 00:48:17.813727 env[1215]: time="2025-09-09T00:48:17.813702732Z" level=info msg="StopPodSandbox for \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\""
Sep 9 00:48:17.813800 env[1215]: time="2025-09-09T00:48:17.813761733Z" level=info msg="Container to stop \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.815295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4-shm.mount: Deactivated successfully.
Sep 9 00:48:17.817355 systemd-networkd[1040]: lxc_health: Link DOWN
Sep 9 00:48:17.817361 systemd-networkd[1040]: lxc_health: Lost carrier
Sep 9 00:48:17.819369 systemd[1]: cri-containerd-68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4.scope: Deactivated successfully.
Sep 9 00:48:17.845750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4-rootfs.mount: Deactivated successfully.
Sep 9 00:48:17.846386 systemd[1]: cri-containerd-0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f.scope: Deactivated successfully.
Sep 9 00:48:17.846736 systemd[1]: cri-containerd-0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f.scope: Consumed 5.800s CPU time.
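The "failed to reload cni configuration" error above is the expected side effect of tearing Cilium down, not a separate fault: the CRI plugin watches /etc/cni/net.d, and removing 05-cilium.conf leaves the directory empty, so every reload fails (and the node later reports NetworkReady=false) until a conf file reappears. Roughly how such a watch loop looks, assuming the fsnotify package; a sketch of the mechanism, not containerd's code:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	// Any create/remove/write of a conf file triggers a reload attempt;
	// with the directory empty, that attempt legitimately fails.
	for ev := range w.Events {
		log.Printf("fs change event(%q: %s), reloading CNI config", ev.Name, ev.Op)
	}
}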
Sep 9 00:48:17.848863 env[1215]: time="2025-09-09T00:48:17.848813058Z" level=info msg="shim disconnected" id=68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4
Sep 9 00:48:17.848863 env[1215]: time="2025-09-09T00:48:17.848854699Z" level=warning msg="cleaning up after shim disconnected" id=68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4 namespace=k8s.io
Sep 9 00:48:17.849008 env[1215]: time="2025-09-09T00:48:17.848874059Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.857031 env[1215]: time="2025-09-09T00:48:17.856992125Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3443 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.857318 env[1215]: time="2025-09-09T00:48:17.857289727Z" level=info msg="TearDown network for sandbox \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\" successfully"
Sep 9 00:48:17.857363 env[1215]: time="2025-09-09T00:48:17.857317248Z" level=info msg="StopPodSandbox for \"68959a92b73aebbba621eafca558885c8cd600d5696bf8d330d26f2fd41dc7b4\" returns successfully"
Sep 9 00:48:17.870085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f-rootfs.mount: Deactivated successfully.
Sep 9 00:48:17.876995 env[1215]: time="2025-09-09T00:48:17.876952408Z" level=info msg="shim disconnected" id=0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f
Sep 9 00:48:17.876995 env[1215]: time="2025-09-09T00:48:17.876993448Z" level=warning msg="cleaning up after shim disconnected" id=0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f namespace=k8s.io
Sep 9 00:48:17.877182 env[1215]: time="2025-09-09T00:48:17.877003208Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.883578 kubelet[1916]: I0909 00:48:17.883543 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a4f7dd-fab8-407e-b8f3-4e63e7322601-cilium-config-path\") pod \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\" (UID: \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\") "
Sep 9 00:48:17.883873 kubelet[1916]: I0909 00:48:17.883602 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dtm\" (UniqueName: \"kubernetes.io/projected/27a4f7dd-fab8-407e-b8f3-4e63e7322601-kube-api-access-q4dtm\") pod \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\" (UID: \"27a4f7dd-fab8-407e-b8f3-4e63e7322601\") "
Sep 9 00:48:17.883905 env[1215]: time="2025-09-09T00:48:17.883651462Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3465 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.885597 env[1215]: time="2025-09-09T00:48:17.885564558Z" level=info msg="StopContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" returns successfully"
Sep 9 00:48:17.886050 env[1215]: time="2025-09-09T00:48:17.886010921Z" level=info msg="StopPodSandbox for \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\""
Sep 9 00:48:17.886099 env[1215]: time="2025-09-09T00:48:17.886082562Z" level=info msg="Container to stop \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.886136 env[1215]: time="2025-09-09T00:48:17.886099842Z" level=info msg="Container to stop \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.886136 env[1215]: time="2025-09-09T00:48:17.886112162Z" level=info msg="Container to stop \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.886136 env[1215]: time="2025-09-09T00:48:17.886122922Z" level=info msg="Container to stop \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.886136 env[1215]: time="2025-09-09T00:48:17.886132762Z" level=info msg="Container to stop \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.888889 kubelet[1916]: I0909 00:48:17.888549 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a4f7dd-fab8-407e-b8f3-4e63e7322601-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27a4f7dd-fab8-407e-b8f3-4e63e7322601" (UID: "27a4f7dd-fab8-407e-b8f3-4e63e7322601"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 00:48:17.889426 kubelet[1916]: I0909 00:48:17.889392 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a4f7dd-fab8-407e-b8f3-4e63e7322601-kube-api-access-q4dtm" (OuterVolumeSpecName: "kube-api-access-q4dtm") pod "27a4f7dd-fab8-407e-b8f3-4e63e7322601" (UID: "27a4f7dd-fab8-407e-b8f3-4e63e7322601"). InnerVolumeSpecName "kube-api-access-q4dtm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:48:17.891430 systemd[1]: cri-containerd-43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340.scope: Deactivated successfully.
Sep 9 00:48:17.915642 env[1215]: time="2025-09-09T00:48:17.915588682Z" level=info msg="shim disconnected" id=43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340
Sep 9 00:48:17.915642 env[1215]: time="2025-09-09T00:48:17.915637683Z" level=warning msg="cleaning up after shim disconnected" id=43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340 namespace=k8s.io
Sep 9 00:48:17.915642 env[1215]: time="2025-09-09T00:48:17.915646203Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.925402 env[1215]: time="2025-09-09T00:48:17.923074383Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3497 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.925402 env[1215]: time="2025-09-09T00:48:17.923404626Z" level=info msg="TearDown network for sandbox \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" successfully"
Sep 9 00:48:17.925402 env[1215]: time="2025-09-09T00:48:17.923427466Z" level=info msg="StopPodSandbox for \"43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340\" returns successfully"
Sep 9 00:48:17.984183 kubelet[1916]: I0909 00:48:17.984151 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-xtables-lock\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984377 kubelet[1916]: I0909 00:48:17.984362 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-bpf-maps\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984459 kubelet[1916]: I0909 00:48:17.984446 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8762\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-kube-api-access-h8762\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984587 kubelet[1916]: I0909 00:48:17.984573 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984700 kubelet[1916]: I0909 00:48:17.984688 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-etc-cni-netd\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984811 kubelet[1916]: I0909 00:48:17.984798 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-net\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984896 kubelet[1916]: I0909 00:48:17.984881 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-cgroup\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.984980 kubelet[1916]: I0909 00:48:17.984288 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985023 kubelet[1916]: I0909 00:48:17.984498 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985023 kubelet[1916]: I0909 00:48:17.984769 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985023 kubelet[1916]: I0909 00:48:17.984840 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985023 kubelet[1916]: I0909 00:48:17.984949 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985023 kubelet[1916]: I0909 00:48:17.984960 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-lib-modules\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985031 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-kernel\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985047 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-hostproc\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985072 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f98d78af-cf33-40b4-b03a-395e37701d34-clustermesh-secrets\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985087 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-run\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985113 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985143 kubelet[1916]: I0909 00:48:17.985130 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cni-path\") pod \"f98d78af-cf33-40b4-b03a-395e37701d34\" (UID: \"f98d78af-cf33-40b4-b03a-395e37701d34\") "
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985174 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985183 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985193 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4dtm\" (UniqueName: \"kubernetes.io/projected/27a4f7dd-fab8-407e-b8f3-4e63e7322601-kube-api-access-q4dtm\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985201 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985209 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985216 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27a4f7dd-fab8-407e-b8f3-4e63e7322601-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985224 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:17.985286 kubelet[1916]: I0909 00:48:17.985244 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cni-path" (OuterVolumeSpecName: "cni-path") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985483 kubelet[1916]: I0909 00:48:17.985258 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985483 kubelet[1916]: I0909 00:48:17.985277 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-hostproc" (OuterVolumeSpecName: "hostproc") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985556 kubelet[1916]: I0909 00:48:17.985529 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.985652 kubelet[1916]: I0909 00:48:17.985621 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.987419 kubelet[1916]: I0909 00:48:17.987379 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:48:17.987495 kubelet[1916]: I0909 00:48:17.987423 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 00:48:17.987773 kubelet[1916]: I0909 00:48:17.987746 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-kube-api-access-h8762" (OuterVolumeSpecName: "kube-api-access-h8762") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "kube-api-access-h8762". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:48:17.989173 kubelet[1916]: I0909 00:48:17.989140 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98d78af-cf33-40b4-b03a-395e37701d34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f98d78af-cf33-40b4-b03a-395e37701d34" (UID: "f98d78af-cf33-40b4-b03a-395e37701d34"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085556 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085593 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8762\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-kube-api-access-h8762\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085605 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f98d78af-cf33-40b4-b03a-395e37701d34-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085614 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085622 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085631 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085658 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.087685 kubelet[1916]: I0909 00:48:18.085667 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f98d78af-cf33-40b4-b03a-395e37701d34-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.088050 kubelet[1916]: I0909 00:48:18.085674 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f98d78af-cf33-40b4-b03a-395e37701d34-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 9 00:48:18.227619 systemd[1]: Removed slice kubepods-burstable-podf98d78af_cf33_40b4_b03a_395e37701d34.slice.
Sep 9 00:48:18.227711 systemd[1]: kubepods-burstable-podf98d78af_cf33_40b4_b03a_395e37701d34.slice: Consumed 5.914s CPU time.
Sep 9 00:48:18.229227 systemd[1]: Removed slice kubepods-besteffort-pod27a4f7dd_fab8_407e_b8f3_4e63e7322601.slice.
Sep 9 00:48:18.398780 kubelet[1916]: I0909 00:48:18.398743 1916 scope.go:117] "RemoveContainer" containerID="0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f"
Sep 9 00:48:18.401100 env[1215]: time="2025-09-09T00:48:18.401063538Z" level=info msg="RemoveContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\""
Sep 9 00:48:18.405261 env[1215]: time="2025-09-09T00:48:18.405222334Z" level=info msg="RemoveContainer for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" returns successfully"
Sep 9 00:48:18.405445 kubelet[1916]: I0909 00:48:18.405422 1916 scope.go:117] "RemoveContainer" containerID="969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807"
Sep 9 00:48:18.406657 env[1215]: time="2025-09-09T00:48:18.406619706Z" level=info msg="RemoveContainer for \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\""
Sep 9 00:48:18.409502 env[1215]: time="2025-09-09T00:48:18.409443491Z" level=info msg="RemoveContainer for \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\" returns successfully"
Sep 9 00:48:18.409704 kubelet[1916]: I0909 00:48:18.409681 1916 scope.go:117] "RemoveContainer" containerID="9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c"
Sep 9 00:48:18.410583 env[1215]: time="2025-09-09T00:48:18.410553541Z" level=info msg="RemoveContainer for \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\""
Sep 9 00:48:18.412819 env[1215]: time="2025-09-09T00:48:18.412746520Z" level=info msg="RemoveContainer for \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\" returns successfully"
Sep 9 00:48:18.413082 kubelet[1916]: I0909 00:48:18.413060 1916 scope.go:117] "RemoveContainer" containerID="a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba"
Sep 9 00:48:18.414490 env[1215]: time="2025-09-09T00:48:18.414406935Z" level=info msg="RemoveContainer for \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\""
Sep 9 00:48:18.416710 env[1215]: time="2025-09-09T00:48:18.416666515Z" level=info msg="RemoveContainer for \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\" returns successfully"
Sep 9 00:48:18.416853 kubelet[1916]: I0909 00:48:18.416829 1916 scope.go:117] "RemoveContainer" containerID="0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f"
Sep 9 00:48:18.418822 env[1215]: time="2025-09-09T00:48:18.418789254Z" level=info msg="RemoveContainer for \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\""
Sep 9 00:48:18.421108 env[1215]: time="2025-09-09T00:48:18.421079554Z" level=info msg="RemoveContainer for \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\" returns successfully"
Sep 9 00:48:18.422184 kubelet[1916]: I0909 00:48:18.422105 1916 scope.go:117] "RemoveContainer" containerID="0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f"
Sep 9 00:48:18.422453 env[1215]: time="2025-09-09T00:48:18.422383405Z" level=error msg="ContainerStatus for \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\": not found"
Sep 9 00:48:18.422671 kubelet[1916]: E0909 00:48:18.422651 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\": not found" containerID="0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f"
Sep 9 00:48:18.423835 kubelet[1916]: I0909 00:48:18.423736 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f"} err="failed to get container status \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d77e6489ebf8a27fdf98e3df5bc89c82f4611768e256c1e5c28b56b560ac74f\": not found"
Sep 9 00:48:18.423835 kubelet[1916]: I0909 00:48:18.423829 1916 scope.go:117] "RemoveContainer" containerID="969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807"
Sep 9 00:48:18.424114 env[1215]: time="2025-09-09T00:48:18.424059020Z" level=error msg="ContainerStatus for \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\": not found"
Sep 9 00:48:18.424218 kubelet[1916]: E0909 00:48:18.424195 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\": not found" containerID="969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807"
Sep 9 00:48:18.424273 kubelet[1916]: I0909 00:48:18.424251 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807"} err="failed to get container status \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\": rpc error: code = NotFound desc = an error occurred when try to find container \"969eba76e743b162b01a821958fd766683cd665b367d90f6bb77464e29dac807\": not found"
Sep 9 00:48:18.424305 kubelet[1916]: I0909 00:48:18.424275 1916 scope.go:117] "RemoveContainer" containerID="9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c"
Sep 9 00:48:18.424707 env[1215]: time="2025-09-09T00:48:18.424654905Z" level=error msg="ContainerStatus for \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\": not found"
Sep 9 00:48:18.424895 kubelet[1916]: E0909 00:48:18.424818 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\": not found" containerID="9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c"
Sep 9 00:48:18.424895 kubelet[1916]: I0909 00:48:18.424842 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c"} err="failed to get container status \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b3a28f8e78ca679d080bc35a56d59af683cab7f642d86b2417f63a3812d888c\": not found"
Sep 9 00:48:18.424895 kubelet[1916]: I0909 00:48:18.424857 1916 scope.go:117] "RemoveContainer" containerID="a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba"
Sep 9 00:48:18.425155 env[1215]: time="2025-09-09T00:48:18.425105829Z" level=error msg="ContainerStatus for \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\": not found"
Sep 9 00:48:18.425280 kubelet[1916]: E0909 00:48:18.425262 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\": not found" containerID="a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba"
Sep 9 00:48:18.425333 kubelet[1916]: I0909 00:48:18.425284 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba"} err="failed to get container status \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"a218144b128a3ecb200acaed15bb9d3f2ef3e1fc64cd95e81eb2bc186eb5f1ba\": not found"
Sep 9 00:48:18.425333 kubelet[1916]: I0909 00:48:18.425299 1916 scope.go:117] "RemoveContainer" containerID="0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f"
Sep 9 00:48:18.425600 env[1215]: time="2025-09-09T00:48:18.425552433Z" level=error msg="ContainerStatus for \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\": not found"
Sep 9 00:48:18.425721 kubelet[1916]: E0909 00:48:18.425695 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\": not found" containerID="0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f"
Sep 9 00:48:18.425759 kubelet[1916]: I0909 00:48:18.425739 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f"} err="failed to get container status \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a98e3935ab011da7550c370376af18a03a1356526bb73ea05d4abc1da9e1b1f\": not found"
Sep 9 00:48:18.425759 kubelet[1916]: I0909 00:48:18.425756 1916 scope.go:117] "RemoveContainer" containerID="c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89"
Sep 9 00:48:18.426704 env[1215]: time="2025-09-09T00:48:18.426668523Z" level=info msg="RemoveContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\""
Sep 9 00:48:18.428771 env[1215]: time="2025-09-09T00:48:18.428732861Z" level=info msg="RemoveContainer for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" returns successfully"
Sep 9 00:48:18.428922 kubelet[1916]: I0909 00:48:18.428902 1916 scope.go:117] "RemoveContainer" containerID="c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89"
Sep 9 00:48:18.429193 env[1215]: time="2025-09-09T00:48:18.429140385Z" level=error msg="ContainerStatus for \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\": not found"
Sep 9 00:48:18.429336 kubelet[1916]: E0909 00:48:18.429317 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\": not found" containerID="c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89"
Sep 9 00:48:18.429383 kubelet[1916]: I0909 00:48:18.429352 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89"} err="failed to get container status \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5952007b71350ddc4bad49b58934234f1214a2eeefa87abef2bfc324d727b89\": not found"
Sep 9 00:48:18.783030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340-rootfs.mount: Deactivated successfully.
Sep 9 00:48:18.783131 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43e7d859106575e731947e9d30bf88e1e12452de00d61fee7057d49afd07c340-shm.mount: Deactivated successfully.
Sep 9 00:48:18.783185 systemd[1]: var-lib-kubelet-pods-f98d78af\x2dcf33\x2d40b4\x2db03a\x2d395e37701d34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 9 00:48:18.783254 systemd[1]: var-lib-kubelet-pods-f98d78af\x2dcf33\x2d40b4\x2db03a\x2d395e37701d34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 9 00:48:18.783307 systemd[1]: var-lib-kubelet-pods-27a4f7dd\x2dfab8\x2d407e\x2db8f3\x2d4e63e7322601-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4dtm.mount: Deactivated successfully.
Sep 9 00:48:18.783357 systemd[1]: var-lib-kubelet-pods-f98d78af\x2dcf33\x2d40b4\x2db03a\x2d395e37701d34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8762.mount: Deactivated successfully.
Sep 9 00:48:19.221129 kubelet[1916]: E0909 00:48:19.221094 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:19.725928 sshd[3354]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:19.730189 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:41438.service: Deactivated successfully.
Sep 9 00:48:19.730797 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:48:19.731366 systemd-logind[1202]: Session 24 logged out. Waiting for processes to exit.
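The repeating "ContainerStatus ... not found" errors in this stretch are a benign post-removal race rather than a failure: the kubelet removes each container of the old Cilium pods, then re-queries the runtime for the same ID, and the runtime correctly answers NotFound. A sketch of the idempotent-delete pattern behind it, assuming only the standard gRPC status package; illustrative names, not kubelet's code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats NotFound as success: a container that no longer
// exists is exactly the state removal was trying to reach.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
	fmt.Println("after removal:", ignoreNotFound(err)) // <nil>
}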
Sep 9 00:48:19.732438 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:41446.service.
Sep 9 00:48:19.733221 systemd-logind[1202]: Removed session 24.
Sep 9 00:48:19.766110 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 41446 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:19.767294 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:19.770644 systemd-logind[1202]: New session 25 of user core.
Sep 9 00:48:19.771609 systemd[1]: Started session-25.scope.
Sep 9 00:48:20.224759 kubelet[1916]: I0909 00:48:20.224723 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a4f7dd-fab8-407e-b8f3-4e63e7322601" path="/var/lib/kubelet/pods/27a4f7dd-fab8-407e-b8f3-4e63e7322601/volumes"
Sep 9 00:48:20.225128 kubelet[1916]: I0909 00:48:20.225109 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98d78af-cf33-40b4-b03a-395e37701d34" path="/var/lib/kubelet/pods/f98d78af-cf33-40b4-b03a-395e37701d34/volumes"
Sep 9 00:48:20.269059 kubelet[1916]: E0909 00:48:20.268975 1916 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:48:20.851807 sshd[3514]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:20.855298 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:49920.service.
Sep 9 00:48:20.856625 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:41446.service: Deactivated successfully.
Sep 9 00:48:20.857314 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:48:20.858574 systemd-logind[1202]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:48:20.859966 systemd-logind[1202]: Removed session 25.
Sep 9 00:48:20.878134 kubelet[1916]: I0909 00:48:20.878084 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="27a4f7dd-fab8-407e-b8f3-4e63e7322601" containerName="cilium-operator"
Sep 9 00:48:20.878134 kubelet[1916]: I0909 00:48:20.878111 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="f98d78af-cf33-40b4-b03a-395e37701d34" containerName="cilium-agent"
Sep 9 00:48:20.883874 systemd[1]: Created slice kubepods-burstable-podb6385f45_81c1_4915_8f30_2f92271d534d.slice.
Sep 9 00:48:20.895513 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 49920 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:20.897074 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:20.902161 systemd[1]: Started session-26.scope.
Sep 9 00:48:20.902435 systemd-logind[1202]: New session 26 of user core.
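The "Created slice" name above is derived mechanically from the new pod's UID: the kubelet prefixes the QoS class and swaps "-" for "_" so the UID survives systemd's unit-name escaping (the \x2d mount units earlier show what happens to literal dashes). A small sketch reconstructing that mapping, verifiable against the log itself:

package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds the systemd slice name for a pod; sketch only,
// kubelet has its own cgroup-name plumbing.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Prints kubepods-burstable-podb6385f45_81c1_4915_8f30_2f92271d534d.slice,
	// matching the "Created slice" entry above.
	fmt.Println(podSlice("burstable", "b6385f45-81c1-4915-8f30-2f92271d534d"))
}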
Sep 9 00:48:20.902914 kubelet[1916]: I0909 00:48:20.902883 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-ipsec-secrets\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.902979 kubelet[1916]: I0909 00:48:20.902938 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-bpf-maps\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.902979 kubelet[1916]: I0909 00:48:20.902958 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-xtables-lock\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.902979 kubelet[1916]: I0909 00:48:20.902977 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-clustermesh-secrets\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903059 kubelet[1916]: I0909 00:48:20.903003 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-kernel\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903059 kubelet[1916]: I0909 00:48:20.903020 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-hubble-tls\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903059 kubelet[1916]: I0909 00:48:20.903039 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cni-path\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903059 kubelet[1916]: I0909 00:48:20.903054 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhfj\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-kube-api-access-twhfj\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903145 kubelet[1916]: I0909 00:48:20.903078 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-hostproc\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903145 kubelet[1916]: I0909 00:48:20.903095 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-cgroup\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903145 kubelet[1916]: I0909 00:48:20.903117 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-etc-cni-netd\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903145 kubelet[1916]: I0909 00:48:20.903134 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-config-path\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903233 kubelet[1916]: I0909 00:48:20.903157 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-net\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903233 kubelet[1916]: I0909 00:48:20.903174 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-run\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:20.903233 kubelet[1916]: I0909 00:48:20.903193 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-lib-modules\") pod \"cilium-sg722\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " pod="kube-system/cilium-sg722"
Sep 9 00:48:21.032329 sshd[3525]: pam_unix(sshd:session): session closed for user core
Sep 9 00:48:21.037093 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:49930.service.
Sep 9 00:48:21.037634 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:49920.service: Deactivated successfully.
Sep 9 00:48:21.038288 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:48:21.038858 systemd-logind[1202]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:48:21.039858 systemd-logind[1202]: Removed session 26.
Sep 9 00:48:21.049490 kubelet[1916]: E0909 00:48:21.048256 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:21.049618 env[1215]: time="2025-09-09T00:48:21.048890437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg722,Uid:b6385f45-81c1-4915-8f30-2f92271d534d,Namespace:kube-system,Attempt:0,}"
Sep 9 00:48:21.066002 env[1215]: time="2025-09-09T00:48:21.065934899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:48:21.066002 env[1215]: time="2025-09-09T00:48:21.065973299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:48:21.066002 env[1215]: time="2025-09-09T00:48:21.065983459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:48:21.066187 env[1215]: time="2025-09-09T00:48:21.066094460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5 pid=3552 runtime=io.containerd.runc.v2
Sep 9 00:48:21.078481 systemd[1]: Started cri-containerd-32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5.scope.
Sep 9 00:48:21.079888 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:48:21.081224 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:48:21.085514 systemd-logind[1202]: New session 27 of user core.
Sep 9 00:48:21.086221 systemd[1]: Started session-27.scope.
Sep 9 00:48:21.106973 env[1215]: time="2025-09-09T00:48:21.106884174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sg722,Uid:b6385f45-81c1-4915-8f30-2f92271d534d,Namespace:kube-system,Attempt:0,} returns sandbox id \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\""
Sep 9 00:48:21.107801 kubelet[1916]: E0909 00:48:21.107705 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:21.110813 env[1215]: time="2025-09-09T00:48:21.110777856Z" level=info msg="CreateContainer within sandbox \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:48:21.120678 env[1215]: time="2025-09-09T00:48:21.120630400Z" level=info msg="CreateContainer within sandbox \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\""
Sep 9 00:48:21.121140 env[1215]: time="2025-09-09T00:48:21.121114485Z" level=info msg="StartContainer for \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\""
Sep 9 00:48:21.134050 systemd[1]: Started cri-containerd-87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec.scope.
Sep 9 00:48:21.148174 systemd[1]: cri-containerd-87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec.scope: Deactivated successfully.
Sep 9 00:48:21.166246 env[1215]: time="2025-09-09T00:48:21.166171045Z" level=info msg="shim disconnected" id=87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec Sep 9 00:48:21.166246 env[1215]: time="2025-09-09T00:48:21.166234925Z" level=warning msg="cleaning up after shim disconnected" id=87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec namespace=k8s.io Sep 9 00:48:21.166246 env[1215]: time="2025-09-09T00:48:21.166245886Z" level=info msg="cleaning up dead shim" Sep 9 00:48:21.178838 env[1215]: time="2025-09-09T00:48:21.178786379Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3618 runtime=io.containerd.runc.v2\ntime=\"2025-09-09T00:48:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 9 00:48:21.179501 env[1215]: time="2025-09-09T00:48:21.179028661Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Sep 9 00:48:21.179501 env[1215]: time="2025-09-09T00:48:21.179263624Z" level=error msg="Failed to pipe stdout of container \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\"" error="reading from a closed fifo" Sep 9 00:48:21.179501 env[1215]: time="2025-09-09T00:48:21.179333665Z" level=error msg="Failed to pipe stderr of container \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\"" error="reading from a closed fifo" Sep 9 00:48:21.182028 env[1215]: time="2025-09-09T00:48:21.181972693Z" level=error msg="StartContainer for \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 9 00:48:21.182237 kubelet[1916]: E0909 00:48:21.182204 1916 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec" Sep 9 00:48:21.182644 kubelet[1916]: E0909 00:48:21.182612 1916 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 9 00:48:21.182644 kubelet[1916]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 9 00:48:21.182644 kubelet[1916]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 9 00:48:21.182644 kubelet[1916]: rm /hostbin/cilium-mount Sep 9 00:48:21.182798 kubelet[1916]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twhfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-sg722_kube-system(b6385f45-81c1-4915-8f30-2f92271d534d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 9 00:48:21.182798 kubelet[1916]: > logger="UnhandledError" Sep 9 00:48:21.184430 kubelet[1916]: E0909 00:48:21.184380 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-sg722" podUID="b6385f45-81c1-4915-8f30-2f92271d534d" Sep 9 00:48:21.413770 env[1215]: time="2025-09-09T00:48:21.413724318Z" level=info msg="StopPodSandbox for \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\"" Sep 9 00:48:21.413925 env[1215]: time="2025-09-09T00:48:21.413788999Z" level=info msg="Container to stop \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:48:21.420509 systemd[1]: cri-containerd-32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5.scope: Deactivated successfully. 
Sep 9 00:48:21.445578 env[1215]: time="2025-09-09T00:48:21.445523536Z" level=info msg="shim disconnected" id=32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5 Sep 9 00:48:21.445742 env[1215]: time="2025-09-09T00:48:21.445582297Z" level=warning msg="cleaning up after shim disconnected" id=32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5 namespace=k8s.io Sep 9 00:48:21.445742 env[1215]: time="2025-09-09T00:48:21.445594177Z" level=info msg="cleaning up dead shim" Sep 9 00:48:21.452209 env[1215]: time="2025-09-09T00:48:21.452174167Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3650 runtime=io.containerd.runc.v2\n" Sep 9 00:48:21.452499 env[1215]: time="2025-09-09T00:48:21.452454650Z" level=info msg="TearDown network for sandbox \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\" successfully" Sep 9 00:48:21.452546 env[1215]: time="2025-09-09T00:48:21.452502010Z" level=info msg="StopPodSandbox for \"32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5\" returns successfully" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.506967 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-ipsec-secrets\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507012 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-bpf-maps\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507043 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-lib-modules\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507059 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-run\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507115 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507141 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507087 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-cgroup\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507157 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507179 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507191 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-etc-cni-netd\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507201 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507227 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-xtables-lock\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507247 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cni-path\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507257 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.507508 kubelet[1916]: I0909 00:48:21.507267 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twhfj\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-kube-api-access-twhfj\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507274 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507284 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-kernel\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507302 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-config-path\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507316 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-net\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507334 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-clustermesh-secrets\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507349 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-hubble-tls\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507363 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-hostproc\") pod \"b6385f45-81c1-4915-8f30-2f92271d534d\" (UID: \"b6385f45-81c1-4915-8f30-2f92271d534d\") " Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507394 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507403 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507412 1916 
reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507420 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507428 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507436 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507445 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507462 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.508284 kubelet[1916]: I0909 00:48:21.507508 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.508771 kubelet[1916]: I0909 00:48:21.508541 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:21.509659 kubelet[1916]: I0909 00:48:21.509563 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:48:21.510185 kubelet[1916]: I0909 00:48:21.510152 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-kube-api-access-twhfj" (OuterVolumeSpecName: "kube-api-access-twhfj") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "kube-api-access-twhfj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:21.511884 kubelet[1916]: I0909 00:48:21.511839 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:21.511884 kubelet[1916]: I0909 00:48:21.511859 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:21.512089 kubelet[1916]: I0909 00:48:21.512050 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6385f45-81c1-4915-8f30-2f92271d534d" (UID: "b6385f45-81c1-4915-8f30-2f92271d534d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:21.608391 kubelet[1916]: I0909 00:48:21.608348 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twhfj\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-kube-api-access-twhfj\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608391 kubelet[1916]: I0909 00:48:21.608383 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608391 kubelet[1916]: I0909 00:48:21.608392 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608571 kubelet[1916]: I0909 00:48:21.608402 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608571 kubelet[1916]: I0909 00:48:21.608410 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608571 kubelet[1916]: I0909 00:48:21.608419 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6385f45-81c1-4915-8f30-2f92271d534d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608571 kubelet[1916]: I0909 00:48:21.608428 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6385f45-81c1-4915-8f30-2f92271d534d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:48:21.608571 kubelet[1916]: I0909 00:48:21.608436 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b6385f45-81c1-4915-8f30-2f92271d534d-cilium-ipsec-secrets\") on node 
\"localhost\" DevicePath \"\"" Sep 9 00:48:21.834625 kubelet[1916]: I0909 00:48:21.833404 1916 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:48:21Z","lastTransitionTime":"2025-09-09T00:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:48:22.007895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32662267502634475a07149c0550e343939aee7a37307fcf661fedb74cb8e1e5-shm.mount: Deactivated successfully. Sep 9 00:48:22.008011 systemd[1]: var-lib-kubelet-pods-b6385f45\x2d81c1\x2d4915\x2d8f30\x2d2f92271d534d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtwhfj.mount: Deactivated successfully. Sep 9 00:48:22.008076 systemd[1]: var-lib-kubelet-pods-b6385f45\x2d81c1\x2d4915\x2d8f30\x2d2f92271d534d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:48:22.008130 systemd[1]: var-lib-kubelet-pods-b6385f45\x2d81c1\x2d4915\x2d8f30\x2d2f92271d534d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:48:22.008179 systemd[1]: var-lib-kubelet-pods-b6385f45\x2d81c1\x2d4915\x2d8f30\x2d2f92271d534d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:48:22.226744 systemd[1]: Removed slice kubepods-burstable-podb6385f45_81c1_4915_8f30_2f92271d534d.slice. Sep 9 00:48:22.412992 kubelet[1916]: I0909 00:48:22.412957 1916 scope.go:117] "RemoveContainer" containerID="87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec" Sep 9 00:48:22.414000 env[1215]: time="2025-09-09T00:48:22.413961714Z" level=info msg="RemoveContainer for \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\"" Sep 9 00:48:22.417211 env[1215]: time="2025-09-09T00:48:22.417152590Z" level=info msg="RemoveContainer for \"87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec\" returns successfully" Sep 9 00:48:22.448314 kubelet[1916]: I0909 00:48:22.448256 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="b6385f45-81c1-4915-8f30-2f92271d534d" containerName="mount-cgroup" Sep 9 00:48:22.453440 systemd[1]: Created slice kubepods-burstable-podd80bf593_54b4_41c1_bfbe_be8b0c286939.slice. 
Sep 9 00:48:22.512581 kubelet[1916]: I0909 00:48:22.512440 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-lib-modules\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.512581 kubelet[1916]: I0909 00:48:22.512504 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-host-proc-sys-kernel\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.512581 kubelet[1916]: I0909 00:48:22.512525 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-etc-cni-netd\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.512581 kubelet[1916]: I0909 00:48:22.512558 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-cilium-cgroup\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.512581 kubelet[1916]: I0909 00:48:22.512575 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-cni-path\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512592 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d80bf593-54b4-41c1-bfbe-be8b0c286939-clustermesh-secrets\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512609 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d80bf593-54b4-41c1-bfbe-be8b0c286939-cilium-ipsec-secrets\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512632 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-bpf-maps\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512650 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d80bf593-54b4-41c1-bfbe-be8b0c286939-hubble-tls\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512668 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4vh\" (UniqueName: 
\"kubernetes.io/projected/d80bf593-54b4-41c1-bfbe-be8b0c286939-kube-api-access-hv4vh\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512686 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-xtables-lock\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512708 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-host-proc-sys-net\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512727 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-cilium-run\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512743 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d80bf593-54b4-41c1-bfbe-be8b0c286939-cilium-config-path\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.513112 kubelet[1916]: I0909 00:48:22.512757 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d80bf593-54b4-41c1-bfbe-be8b0c286939-hostproc\") pod \"cilium-kdf7c\" (UID: \"d80bf593-54b4-41c1-bfbe-be8b0c286939\") " pod="kube-system/cilium-kdf7c" Sep 9 00:48:22.756285 kubelet[1916]: E0909 00:48:22.756227 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:22.757045 env[1215]: time="2025-09-09T00:48:22.756698557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdf7c,Uid:d80bf593-54b4-41c1-bfbe-be8b0c286939,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:22.768823 env[1215]: time="2025-09-09T00:48:22.768667171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:22.768823 env[1215]: time="2025-09-09T00:48:22.768720052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:22.768823 env[1215]: time="2025-09-09T00:48:22.768730132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:22.768956 env[1215]: time="2025-09-09T00:48:22.768848413Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2 pid=3677 runtime=io.containerd.runc.v2 Sep 9 00:48:22.778052 systemd[1]: Started cri-containerd-62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2.scope. 
Sep 9 00:48:22.804785 env[1215]: time="2025-09-09T00:48:22.804744136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdf7c,Uid:d80bf593-54b4-41c1-bfbe-be8b0c286939,Namespace:kube-system,Attempt:0,} returns sandbox id \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\"" Sep 9 00:48:22.805448 kubelet[1916]: E0909 00:48:22.805421 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:22.807376 env[1215]: time="2025-09-09T00:48:22.807333125Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:48:22.816939 env[1215]: time="2025-09-09T00:48:22.816895752Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3\"" Sep 9 00:48:22.817361 env[1215]: time="2025-09-09T00:48:22.817330437Z" level=info msg="StartContainer for \"ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3\"" Sep 9 00:48:22.833122 systemd[1]: Started cri-containerd-ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3.scope. Sep 9 00:48:22.863724 env[1215]: time="2025-09-09T00:48:22.863675556Z" level=info msg="StartContainer for \"ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3\" returns successfully" Sep 9 00:48:22.868988 systemd[1]: cri-containerd-ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3.scope: Deactivated successfully. Sep 9 00:48:22.890333 env[1215]: time="2025-09-09T00:48:22.890292735Z" level=info msg="shim disconnected" id=ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3 Sep 9 00:48:22.890627 env[1215]: time="2025-09-09T00:48:22.890589738Z" level=warning msg="cleaning up after shim disconnected" id=ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3 namespace=k8s.io Sep 9 00:48:22.890627 env[1215]: time="2025-09-09T00:48:22.890615458Z" level=info msg="cleaning up dead shim" Sep 9 00:48:22.896342 env[1215]: time="2025-09-09T00:48:22.896310802Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3762 runtime=io.containerd.runc.v2\n" Sep 9 00:48:23.221772 kubelet[1916]: E0909 00:48:23.221744 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:23.418897 kubelet[1916]: E0909 00:48:23.418866 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:23.421706 env[1215]: time="2025-09-09T00:48:23.421652366Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:48:23.432056 env[1215]: time="2025-09-09T00:48:23.431975927Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7\"" Sep 9 00:48:23.432633 env[1215]: time="2025-09-09T00:48:23.432607375Z" level=info msg="StartContainer for \"4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7\"" Sep 9 00:48:23.453639 systemd[1]: Started cri-containerd-4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7.scope. Sep 9 00:48:23.487813 env[1215]: time="2025-09-09T00:48:23.487705703Z" level=info msg="StartContainer for \"4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7\" returns successfully" Sep 9 00:48:23.492043 systemd[1]: cri-containerd-4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7.scope: Deactivated successfully. Sep 9 00:48:23.514710 env[1215]: time="2025-09-09T00:48:23.514662861Z" level=info msg="shim disconnected" id=4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7 Sep 9 00:48:23.514710 env[1215]: time="2025-09-09T00:48:23.514709381Z" level=warning msg="cleaning up after shim disconnected" id=4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7 namespace=k8s.io Sep 9 00:48:23.514928 env[1215]: time="2025-09-09T00:48:23.514718221Z" level=info msg="cleaning up dead shim" Sep 9 00:48:23.523146 env[1215]: time="2025-09-09T00:48:23.523105720Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n" Sep 9 00:48:24.008120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7-rootfs.mount: Deactivated successfully. Sep 9 00:48:24.223008 kubelet[1916]: I0909 00:48:24.222741 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6385f45-81c1-4915-8f30-2f92271d534d" path="/var/lib/kubelet/pods/b6385f45-81c1-4915-8f30-2f92271d534d/volumes" Sep 9 00:48:24.270665 kubelet[1916]: W0909 00:48:24.270575 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6385f45_81c1_4915_8f30_2f92271d534d.slice/cri-containerd-87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec.scope WatchSource:0}: container "87ab8ff2194e3daba4e54b893635125a7c8cd4284e03814e35c3ad1e38b9cfec" in namespace "k8s.io": not found Sep 9 00:48:24.422322 kubelet[1916]: E0909 00:48:24.422147 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:24.423767 env[1215]: time="2025-09-09T00:48:24.423716666Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:48:24.434978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119504732.mount: Deactivated successfully. 
Sep 9 00:48:24.440569 env[1215]: time="2025-09-09T00:48:24.440523393Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a\"" Sep 9 00:48:24.446743 env[1215]: time="2025-09-09T00:48:24.446276384Z" level=info msg="StartContainer for \"601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a\"" Sep 9 00:48:24.464499 systemd[1]: Started cri-containerd-601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a.scope. Sep 9 00:48:24.497434 env[1215]: time="2025-09-09T00:48:24.497398453Z" level=info msg="StartContainer for \"601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a\" returns successfully" Sep 9 00:48:24.497439 systemd[1]: cri-containerd-601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a.scope: Deactivated successfully. Sep 9 00:48:24.519193 env[1215]: time="2025-09-09T00:48:24.519149041Z" level=info msg="shim disconnected" id=601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a Sep 9 00:48:24.519426 env[1215]: time="2025-09-09T00:48:24.519407124Z" level=warning msg="cleaning up after shim disconnected" id=601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a namespace=k8s.io Sep 9 00:48:24.519534 env[1215]: time="2025-09-09T00:48:24.519518085Z" level=info msg="cleaning up dead shim" Sep 9 00:48:24.525931 env[1215]: time="2025-09-09T00:48:24.525842003Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3879 runtime=io.containerd.runc.v2\n" Sep 9 00:48:25.008176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a-rootfs.mount: Deactivated successfully. Sep 9 00:48:25.270289 kubelet[1916]: E0909 00:48:25.270179 1916 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:48:25.425925 kubelet[1916]: E0909 00:48:25.425879 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:25.428403 env[1215]: time="2025-09-09T00:48:25.428361734Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:48:25.438430 env[1215]: time="2025-09-09T00:48:25.438370263Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c\"" Sep 9 00:48:25.438850 env[1215]: time="2025-09-09T00:48:25.438809588Z" level=info msg="StartContainer for \"1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c\"" Sep 9 00:48:25.456282 systemd[1]: Started cri-containerd-1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c.scope. Sep 9 00:48:25.485003 systemd[1]: cri-containerd-1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c.scope: Deactivated successfully. 
Sep 9 00:48:25.485738 env[1215]: time="2025-09-09T00:48:25.485700150Z" level=info msg="StartContainer for \"1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c\" returns successfully" Sep 9 00:48:25.504406 env[1215]: time="2025-09-09T00:48:25.504358429Z" level=info msg="shim disconnected" id=1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c Sep 9 00:48:25.504406 env[1215]: time="2025-09-09T00:48:25.504401790Z" level=warning msg="cleaning up after shim disconnected" id=1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c namespace=k8s.io Sep 9 00:48:25.504406 env[1215]: time="2025-09-09T00:48:25.504411150Z" level=info msg="cleaning up dead shim" Sep 9 00:48:25.510363 env[1215]: time="2025-09-09T00:48:25.510332466Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3935 runtime=io.containerd.runc.v2\n" Sep 9 00:48:26.008242 systemd[1]: run-containerd-runc-k8s.io-1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c-runc.Oo0laD.mount: Deactivated successfully. Sep 9 00:48:26.008348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c-rootfs.mount: Deactivated successfully. Sep 9 00:48:26.429647 kubelet[1916]: E0909 00:48:26.429614 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:26.432924 env[1215]: time="2025-09-09T00:48:26.432074430Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:48:26.445504 env[1215]: time="2025-09-09T00:48:26.445449249Z" level=info msg="CreateContainer within sandbox \"62001aef054528124da01036d23a709323f3287b84cd57ecf7520b6220d105d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb\"" Sep 9 00:48:26.446164 env[1215]: time="2025-09-09T00:48:26.446135378Z" level=info msg="StartContainer for \"f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb\"" Sep 9 00:48:26.464539 systemd[1]: Started cri-containerd-f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb.scope. Sep 9 00:48:26.495093 env[1215]: time="2025-09-09T00:48:26.495035150Z" level=info msg="StartContainer for \"f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb\" returns successfully" Sep 9 00:48:26.733485 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 9 00:48:27.008366 systemd[1]: run-containerd-runc-k8s.io-f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb-runc.tkPd81.mount: Deactivated successfully. 
Sep 9 00:48:27.382161 kubelet[1916]: W0909 00:48:27.382037 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80bf593_54b4_41c1_bfbe_be8b0c286939.slice/cri-containerd-ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3.scope WatchSource:0}: task ee6b68d79901c36232440dbd9b907696f300c6ef716af0c86f53c799ad8c19f3 not found: not found Sep 9 00:48:27.434111 kubelet[1916]: E0909 00:48:27.434073 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:27.452322 kubelet[1916]: I0909 00:48:27.452262 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kdf7c" podStartSLOduration=5.452244938 podStartE2EDuration="5.452244938s" podCreationTimestamp="2025-09-09 00:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:27.450540994 +0000 UTC m=+97.316528616" watchObservedRunningTime="2025-09-09 00:48:27.452244938 +0000 UTC m=+97.318232560" Sep 9 00:48:28.221376 kubelet[1916]: E0909 00:48:28.221344 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:28.757194 kubelet[1916]: E0909 00:48:28.757154 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:29.343873 systemd[1]: run-containerd-runc-k8s.io-f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb-runc.a0DJwW.mount: Deactivated successfully. 
Sep 9 00:48:29.370279 systemd-networkd[1040]: lxc_health: Link UP Sep 9 00:48:29.379495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:48:29.382851 systemd-networkd[1040]: lxc_health: Gained carrier Sep 9 00:48:30.488890 kubelet[1916]: W0909 00:48:30.488849 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80bf593_54b4_41c1_bfbe_be8b0c286939.slice/cri-containerd-4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7.scope WatchSource:0}: task 4836e4518004de45e70b4453bb19b4777471d7031a5c8afb2bb6bc36e26757d7 not found: not found Sep 9 00:48:30.758529 kubelet[1916]: E0909 00:48:30.758411 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:31.336597 systemd-networkd[1040]: lxc_health: Gained IPv6LL Sep 9 00:48:31.440313 kubelet[1916]: E0909 00:48:31.440271 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:32.441989 kubelet[1916]: E0909 00:48:32.441949 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:33.594989 kubelet[1916]: W0909 00:48:33.594944 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80bf593_54b4_41c1_bfbe_be8b0c286939.slice/cri-containerd-601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a.scope WatchSource:0}: task 601c746e279448e6e7fba01445e9ca8848d4ad17f20f715e728566f6e3e0a38a not found: not found Sep 9 00:48:33.616117 systemd[1]: run-containerd-runc-k8s.io-f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb-runc.XAh9Dp.mount: Deactivated successfully. Sep 9 00:48:35.737696 systemd[1]: run-containerd-runc-k8s.io-f78f56d436605f743e319cd56499de8e958f430f3aa5aa43051382917701ffeb-runc.hGphbf.mount: Deactivated successfully. Sep 9 00:48:35.787634 sshd[3542]: pam_unix(sshd:session): session closed for user core Sep 9 00:48:35.789957 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:48:35.790490 systemd-logind[1202]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:48:35.790635 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:49930.service: Deactivated successfully. Sep 9 00:48:35.791624 systemd-logind[1202]: Removed session 27. Sep 9 00:48:36.702879 kubelet[1916]: W0909 00:48:36.702825 1916 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80bf593_54b4_41c1_bfbe_be8b0c286939.slice/cri-containerd-1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c.scope WatchSource:0}: task 1a6a4b5b810ff7a4868b4331bb8737fe5d46b2ebf1dfbdbf032d5f491481d91c not found: not found