Sep 9 00:24:53.692134 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:24:53.692155 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025
Sep 9 00:24:53.692163 kernel: efi: EFI v2.70 by EDK II
Sep 9 00:24:53.692169 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 9 00:24:53.692175 kernel: random: crng init done
Sep 9 00:24:53.692181 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:24:53.692188 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 9 00:24:53.692195 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:24:53.692201 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692207 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692213 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692218 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692224 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692230 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692238 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692244 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692251 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:24:53.692256 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:24:53.692263 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:24:53.692269 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:24:53.692275 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Sep 9 00:24:53.692281 kernel: Zone ranges:
Sep 9 00:24:53.692287 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:24:53.692294 kernel: DMA32 empty
Sep 9 00:24:53.692300 kernel: Normal empty
Sep 9 00:24:53.692306 kernel: Movable zone start for each node
Sep 9 00:24:53.692312 kernel: Early memory node ranges
Sep 9 00:24:53.692317 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 9 00:24:53.692323 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 9 00:24:53.692329 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 9 00:24:53.692335 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 9 00:24:53.692341 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 9 00:24:53.692347 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 9 00:24:53.692353 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 9 00:24:53.692359 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:24:53.692367 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:24:53.692373 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:24:53.692378 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:24:53.692384 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:24:53.692390 kernel: psci: Trusted OS migration not required
Sep 9 00:24:53.692399 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:24:53.692405 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:24:53.692412 kernel: ACPI: SRAT not present
Sep 9 00:24:53.692419 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 9 00:24:53.692426 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 9 00:24:53.692433 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:24:53.692439 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:24:53.692446 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:24:53.692452 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:24:53.692459 kernel: CPU features: detected: Spectre-v4
Sep 9 00:24:53.692465 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:24:53.692473 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:24:53.692479 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:24:53.692486 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:24:53.692492 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:24:53.692499 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:24:53.692511 kernel: Policy zone: DMA
Sep 9 00:24:53.692518 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:24:53.692525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:24:53.692536 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:24:53.692543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:24:53.692550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:24:53.692558 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Sep 9 00:24:53.692565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:24:53.692571 kernel: trace event string verifier disabled
Sep 9 00:24:53.692577 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:24:53.692584 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:24:53.692590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:24:53.692597 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:24:53.692603 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:24:53.692610 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:24:53.692617 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:24:53.692624 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:24:53.692632 kernel: GICv3: 256 SPIs implemented
Sep 9 00:24:53.692638 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:24:53.692644 kernel: GICv3: Distributor has no Range Selector support
Sep 9 00:24:53.692651 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:24:53.692657 kernel: GICv3: 16 PPIs implemented
Sep 9 00:24:53.692664 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:24:53.692670 kernel: ACPI: SRAT not present
Sep 9 00:24:53.692676 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:24:53.692683 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:24:53.692690 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:24:53.692696 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 9 00:24:53.692702 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 9 00:24:53.692710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:24:53.692717 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:24:53.692723 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:24:53.692730 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:24:53.692739 kernel: arm-pv: using stolen time PV
Sep 9 00:24:53.692746 kernel: Console: colour dummy device 80x25
Sep 9 00:24:53.692752 kernel: ACPI: Core revision 20210730
Sep 9 00:24:53.692759 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:24:53.692766 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:24:53.692773 kernel: LSM: Security Framework initializing
Sep 9 00:24:53.692781 kernel: SELinux: Initializing.
Sep 9 00:24:53.692787 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:24:53.692794 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:24:53.692800 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:24:53.692807 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:24:53.692813 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:24:53.692820 kernel: Remapping and enabling EFI services.
Sep 9 00:24:53.692826 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:24:53.692832 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:24:53.692840 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:24:53.692846 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 9 00:24:53.692853 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:24:53.692859 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:24:53.692866 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:24:53.692872 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:24:53.692879 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 9 00:24:53.692886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:24:53.692892 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:24:53.692898 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:24:53.692907 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:24:53.692913 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 9 00:24:53.692920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:24:53.692926 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:24:53.692937 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:24:53.692946 kernel: SMP: Total of 4 processors activated.
Sep 9 00:24:53.692953 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:24:53.693437 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:24:53.693449 kernel: CPU features: detected: Common not Private translations
Sep 9 00:24:53.693456 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:24:53.693463 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:24:53.693470 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:24:53.693482 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:24:53.693489 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:24:53.693496 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:24:53.693502 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:24:53.693509 kernel: alternatives: patching kernel code
Sep 9 00:24:53.693517 kernel: devtmpfs: initialized
Sep 9 00:24:53.693524 kernel: KASLR enabled
Sep 9 00:24:53.693541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:24:53.693550 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:24:53.693557 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:24:53.693564 kernel: SMBIOS 3.0.0 present.
Sep 9 00:24:53.693570 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 9 00:24:53.693577 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:24:53.693584 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:24:53.693593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:24:53.693600 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:24:53.693607 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:24:53.693614 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Sep 9 00:24:53.693621 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:24:53.693627 kernel: cpuidle: using governor menu
Sep 9 00:24:53.693634 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:24:53.693641 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:24:53.693648 kernel: ACPI: bus type PCI registered
Sep 9 00:24:53.693656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:24:53.693663 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:24:53.693670 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:24:53.693677 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:24:53.693684 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:24:53.693691 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:24:53.693698 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:24:53.693705 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:24:53.693712 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:24:53.693720 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:24:53.693727 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:24:53.693734 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 9 00:24:53.693741 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 9 00:24:53.693749 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 9 00:24:53.693756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:24:53.693763 kernel: ACPI: Interpreter enabled
Sep 9 00:24:53.693769 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:24:53.693776 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:24:53.693785 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:24:53.693792 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:24:53.693799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:24:53.693936 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:24:53.694022 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:24:53.694087 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:24:53.694177 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:24:53.694248 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:24:53.694258 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:24:53.694266 kernel: PCI host bridge to bus 0000:00
Sep 9 00:24:53.694339 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:24:53.694401 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:24:53.694456 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:24:53.694512 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:24:53.694615 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:24:53.694695 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:24:53.694762 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:24:53.694829 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:24:53.694893 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:24:53.694968 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:24:53.695033 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:24:53.695101 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:24:53.695159 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:24:53.695217 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:24:53.695274 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:24:53.695284 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:24:53.695291 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:24:53.695298 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:24:53.695305 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:24:53.695314 kernel: iommu: Default domain type: Translated
Sep 9 00:24:53.695321 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:24:53.695328 kernel: vgaarb: loaded
Sep 9 00:24:53.695335 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 00:24:53.695342 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 00:24:53.695349 kernel: PTP clock support registered
Sep 9 00:24:53.695356 kernel: Registered efivars operations
Sep 9 00:24:53.695363 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:24:53.695370 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:24:53.695378 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:24:53.695385 kernel: pnp: PnP ACPI init
Sep 9 00:24:53.695497 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:24:53.695510 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:24:53.695517 kernel: NET: Registered PF_INET protocol family
Sep 9 00:24:53.695525 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:24:53.695540 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:24:53.695548 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:24:53.695558 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:24:53.695566 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 9 00:24:53.695573 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:24:53.695580 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:24:53.695588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:24:53.695595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:24:53.695602 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:24:53.695609 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:24:53.695616 kernel: kvm [1]: HYP mode not available
Sep 9 00:24:53.695624 kernel: Initialise system trusted keyrings
Sep 9 00:24:53.695631 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:24:53.695638 kernel: Key type asymmetric registered
Sep 9 00:24:53.695645 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:24:53.695652 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 00:24:53.695659 kernel: io scheduler mq-deadline registered
Sep 9 00:24:53.695666 kernel: io scheduler kyber registered
Sep 9 00:24:53.695673 kernel: io scheduler bfq registered
Sep 9 00:24:53.695680 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:24:53.695689 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:24:53.695696 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:24:53.696062 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:24:53.696080 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:24:53.696087 kernel: thunder_xcv, ver 1.0
Sep 9 00:24:53.696094 kernel: thunder_bgx, ver 1.0
Sep 9 00:24:53.696101 kernel: nicpf, ver 1.0
Sep 9 00:24:53.696108 kernel: nicvf, ver 1.0
Sep 9 00:24:53.703465 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:24:53.703576 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:24:53 UTC (1757377493)
Sep 9 00:24:53.703587 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:24:53.703595 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:24:53.703602 kernel: Segment Routing with IPv6
Sep 9 00:24:53.703609 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:24:53.703616 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:24:53.703623 kernel: Key type dns_resolver registered
Sep 9 00:24:53.703631 kernel: registered taskstats version 1
Sep 9 00:24:53.703640 kernel: Loading compiled-in X.509 certificates
Sep 9 00:24:53.703647 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252'
Sep 9 00:24:53.703654 kernel: Key type .fscrypt registered
Sep 9 00:24:53.703661 kernel: Key type fscrypt-provisioning registered
Sep 9 00:24:53.703667 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:24:53.703674 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:24:53.703681 kernel: ima: No architecture policies found
Sep 9 00:24:53.703688 kernel: clk: Disabling unused clocks
Sep 9 00:24:53.703695 kernel: Freeing unused kernel memory: 36416K
Sep 9 00:24:53.703703 kernel: Run /init as init process
Sep 9 00:24:53.703709 kernel: with arguments:
Sep 9 00:24:53.703716 kernel: /init
Sep 9 00:24:53.703723 kernel: with environment:
Sep 9 00:24:53.703730 kernel: HOME=/
Sep 9 00:24:53.703736 kernel: TERM=linux
Sep 9 00:24:53.703743 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:24:53.703752 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:24:53.703762 systemd[1]: Detected virtualization kvm.
Sep 9 00:24:53.703769 systemd[1]: Detected architecture arm64.
Sep 9 00:24:53.703777 systemd[1]: Running in initrd.
Sep 9 00:24:53.703784 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:24:53.703791 systemd[1]: Hostname set to .
Sep 9 00:24:53.703798 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:24:53.703806 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:24:53.703813 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:24:53.703821 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:24:53.703829 systemd[1]: Reached target paths.target.
Sep 9 00:24:53.703836 systemd[1]: Reached target slices.target.
Sep 9 00:24:53.703843 systemd[1]: Reached target swap.target.
Sep 9 00:24:53.703850 systemd[1]: Reached target timers.target.
Sep 9 00:24:53.703857 systemd[1]: Listening on iscsid.socket.
Sep 9 00:24:53.703865 systemd[1]: Listening on iscsiuio.socket.
Sep 9 00:24:53.703873 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:24:53.703881 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:24:53.703888 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:24:53.703895 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:24:53.703902 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:24:53.703909 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:24:53.703917 systemd[1]: Reached target sockets.target.
Sep 9 00:24:53.703924 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:24:53.703931 systemd[1]: Finished network-cleanup.service.
Sep 9 00:24:53.703939 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:24:53.703947 systemd[1]: Starting systemd-journald.service...
Sep 9 00:24:53.703954 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:24:53.703975 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:24:53.703982 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 9 00:24:53.703989 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:24:53.703996 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:24:53.704004 kernel: audit: type=1130 audit(1757377493.692:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.704012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:24:53.704021 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:24:53.704028 kernel: audit: type=1130 audit(1757377493.702:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.704038 systemd-journald[291]: Journal started
Sep 9 00:24:53.704091 systemd-journald[291]: Runtime Journal (/run/log/journal/3f7792760be049928cc9761df70a2a7e) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:24:53.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.704710 systemd-modules-load[292]: Inserted module 'overlay'
Sep 9 00:24:53.705977 systemd[1]: Started systemd-journald.service.
Sep 9 00:24:53.706656 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 9 00:24:53.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.711929 kernel: audit: type=1130 audit(1757377493.706:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.711971 kernel: audit: type=1130 audit(1757377493.707:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.708182 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 9 00:24:53.722831 systemd-resolved[293]: Positive Trust Anchors:
Sep 9 00:24:53.722844 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:24:53.722872 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:24:53.731697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:24:53.731719 kernel: Bridge firewalling registered
Sep 9 00:24:53.727544 systemd-resolved[293]: Defaulting to hostname 'linux'.
Sep 9 00:24:53.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.729299 systemd[1]: Started systemd-resolved.service.
Sep 9 00:24:53.736735 kernel: audit: type=1130 audit(1757377493.732:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.730788 systemd-modules-load[292]: Inserted module 'br_netfilter'
Sep 9 00:24:53.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.733026 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:24:53.742108 kernel: audit: type=1130 audit(1757377493.737:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.742129 kernel: SCSI subsystem initialized
Sep 9 00:24:53.736342 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 9 00:24:53.738197 systemd[1]: Starting dracut-cmdline.service...
Sep 9 00:24:53.748295 dracut-cmdline[308]: dracut-dracut-053
Sep 9 00:24:53.750098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:24:53.750118 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:24:53.750128 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 9 00:24:53.752138 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:24:53.756042 systemd-modules-load[292]: Inserted module 'dm_multipath'
Sep 9 00:24:53.756858 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:24:53.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.758676 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:24:53.761569 kernel: audit: type=1130 audit(1757377493.756:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.767611 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:24:53.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.771994 kernel: audit: type=1130 audit(1757377493.767:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.814980 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:24:53.826974 kernel: iscsi: registered transport (tcp)
Sep 9 00:24:53.841991 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:24:53.842008 kernel: QLogic iSCSI HBA Driver
Sep 9 00:24:53.875489 systemd[1]: Finished dracut-cmdline.service.
Sep 9 00:24:53.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.878984 kernel: audit: type=1130 audit(1757377493.875:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:53.877184 systemd[1]: Starting dracut-pre-udev.service...
Sep 9 00:24:53.918986 kernel: raid6: neonx8 gen() 13729 MB/s
Sep 9 00:24:53.935971 kernel: raid6: neonx8 xor() 10813 MB/s
Sep 9 00:24:53.952973 kernel: raid6: neonx4 gen() 13539 MB/s
Sep 9 00:24:53.969975 kernel: raid6: neonx4 xor() 11044 MB/s
Sep 9 00:24:53.986970 kernel: raid6: neonx2 gen() 12944 MB/s
Sep 9 00:24:54.003969 kernel: raid6: neonx2 xor() 10234 MB/s
Sep 9 00:24:54.020972 kernel: raid6: neonx1 gen() 10543 MB/s
Sep 9 00:24:54.037972 kernel: raid6: neonx1 xor() 8770 MB/s
Sep 9 00:24:54.054969 kernel: raid6: int64x8 gen() 6269 MB/s
Sep 9 00:24:54.071974 kernel: raid6: int64x8 xor() 3539 MB/s
Sep 9 00:24:54.088970 kernel: raid6: int64x4 gen() 7208 MB/s
Sep 9 00:24:54.105970 kernel: raid6: int64x4 xor() 3856 MB/s
Sep 9 00:24:54.122976 kernel: raid6: int64x2 gen() 6150 MB/s
Sep 9 00:24:54.139969 kernel: raid6: int64x2 xor() 3318 MB/s
Sep 9 00:24:54.156970 kernel: raid6: int64x1 gen() 5039 MB/s
Sep 9 00:24:54.174271 kernel: raid6: int64x1 xor() 2644 MB/s
Sep 9 00:24:54.174286 kernel: raid6: using algorithm neonx8 gen() 13729 MB/s
Sep 9 00:24:54.174295 kernel: raid6: .... xor() 10813 MB/s, rmw enabled
Sep 9 00:24:54.174303 kernel: raid6: using neon recovery algorithm
Sep 9 00:24:54.185081 kernel: xor: measuring software checksum speed
Sep 9 00:24:54.185103 kernel: 8regs : 16842 MB/sec
Sep 9 00:24:54.186162 kernel: 32regs : 20702 MB/sec
Sep 9 00:24:54.186174 kernel: arm64_neon : 25276 MB/sec
Sep 9 00:24:54.186183 kernel: xor: using function: arm64_neon (25276 MB/sec)
Sep 9 00:24:54.238988 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 9 00:24:54.249338 systemd[1]: Finished dracut-pre-udev.service.
Sep 9 00:24:54.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:54.249000 audit: BPF prog-id=7 op=LOAD
Sep 9 00:24:54.249000 audit: BPF prog-id=8 op=LOAD
Sep 9 00:24:54.251017 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:24:54.263023 systemd-udevd[493]: Using default interface naming scheme 'v252'.
Sep 9 00:24:54.266966 systemd[1]: Started systemd-udevd.service.
Sep 9 00:24:54.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:54.268373 systemd[1]: Starting dracut-pre-trigger.service...
Sep 9 00:24:54.279779 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Sep 9 00:24:54.308547 systemd[1]: Finished dracut-pre-trigger.service.
Sep 9 00:24:54.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:54.310016 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:24:54.345496 systemd[1]: Finished systemd-udev-trigger.service.
Sep 9 00:24:54.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:54.377618 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:24:54.380773 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:24:54.380787 kernel: GPT:9289727 != 19775487
Sep 9 00:24:54.380802 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:24:54.380810 kernel: GPT:9289727 != 19775487
Sep 9 00:24:54.380818 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:24:54.380826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:24:54.401992 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (550)
Sep 9 00:24:54.403221 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 9 00:24:54.404123 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 9 00:24:54.408439 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 9 00:24:54.411650 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 9 00:24:54.417433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 9 00:24:54.420846 systemd[1]: Starting disk-uuid.service...
Sep 9 00:24:54.426752 disk-uuid[561]: Primary Header is updated.
Sep 9 00:24:54.426752 disk-uuid[561]: Secondary Entries is updated.
Sep 9 00:24:54.426752 disk-uuid[561]: Secondary Header is updated.
Sep 9 00:24:54.429519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:24:55.434990 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:24:55.435082 disk-uuid[562]: The operation has completed successfully.
Sep 9 00:24:55.457197 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:24:55.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.457302 systemd[1]: Finished disk-uuid.service.
Sep 9 00:24:55.461206 systemd[1]: Starting verity-setup.service...
Sep 9 00:24:55.473978 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 00:24:55.493816 systemd[1]: Found device dev-mapper-usr.device.
Sep 9 00:24:55.496011 systemd[1]: Mounting sysusr-usr.mount...
Sep 9 00:24:55.498253 systemd[1]: Finished verity-setup.service.
Sep 9 00:24:55.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.542984 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 9 00:24:55.543089 systemd[1]: Mounted sysusr-usr.mount.
Sep 9 00:24:55.543836 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 9 00:24:55.544637 systemd[1]: Starting ignition-setup.service...
Sep 9 00:24:55.546449 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 9 00:24:55.553362 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:24:55.553408 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:24:55.553418 kernel: BTRFS info (device vda6): has skinny extents
Sep 9 00:24:55.562545 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:24:55.570588 systemd[1]: Finished ignition-setup.service.
Sep 9 00:24:55.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.572367 systemd[1]: Starting ignition-fetch-offline.service...
Sep 9 00:24:55.629534 ignition[649]: Ignition 2.14.0
Sep 9 00:24:55.629545 ignition[649]: Stage: fetch-offline
Sep 9 00:24:55.629591 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:24:55.629600 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:24:55.629739 ignition[649]: parsed url from cmdline: ""
Sep 9 00:24:55.629742 ignition[649]: no config URL provided
Sep 9 00:24:55.629747 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:24:55.629754 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:24:55.629773 ignition[649]: op(1): [started] loading QEMU firmware config module
Sep 9 00:24:55.629777 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:24:55.636260 ignition[649]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:24:55.642637 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 9 00:24:55.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.643000 audit: BPF prog-id=9 op=LOAD
Sep 9 00:24:55.644890 systemd[1]: Starting systemd-networkd.service...
Sep 9 00:24:55.667790 systemd-networkd[739]: lo: Link UP
Sep 9 00:24:55.667802 systemd-networkd[739]: lo: Gained carrier
Sep 9 00:24:55.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.668202 systemd-networkd[739]: Enumeration completed
Sep 9 00:24:55.668289 systemd[1]: Started systemd-networkd.service.
Sep 9 00:24:55.668387 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:24:55.669335 systemd-networkd[739]: eth0: Link UP
Sep 9 00:24:55.669338 systemd-networkd[739]: eth0: Gained carrier
Sep 9 00:24:55.669615 systemd[1]: Reached target network.target.
Sep 9 00:24:55.671537 systemd[1]: Starting iscsiuio.service...
Sep 9 00:24:55.678476 systemd[1]: Started iscsiuio.service.
Sep 9 00:24:55.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.680376 systemd[1]: Starting iscsid.service...
Sep 9 00:24:55.683955 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 9 00:24:55.683955 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 9 00:24:55.683955 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 9 00:24:55.683955 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 9 00:24:55.683955 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 9 00:24:55.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.686708 systemd[1]: Started iscsid.service.
Sep 9 00:24:55.696138 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 9 00:24:55.691420 ignition[649]: parsing config with SHA512: eb8a05846c1efb19bdfd1134fa94b559953c242e9aa2ffb51d2c09f47d92ceded8310a1f998b3007295865dd9425d74ac4f97ab47565f002295eafcd788dd12e
Sep 9 00:24:55.686743 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:24:55.692546 systemd[1]: Starting dracut-initqueue.service...
Sep 9 00:24:55.703590 unknown[649]: fetched base config from "system"
Sep 9 00:24:55.704074 ignition[649]: fetch-offline: fetch-offline passed
Sep 9 00:24:55.703597 unknown[649]: fetched user config from "qemu"
Sep 9 00:24:55.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.704129 ignition[649]: Ignition finished successfully
Sep 9 00:24:55.704421 systemd[1]: Finished dracut-initqueue.service.
Sep 9 00:24:55.705952 systemd[1]: Finished ignition-fetch-offline.service.
Sep 9 00:24:55.707295 systemd[1]: Reached target remote-fs-pre.target.
Sep 9 00:24:55.708500 systemd[1]: Reached target remote-cryptsetup.target.
Sep 9 00:24:55.709850 systemd[1]: Reached target remote-fs.target.
Sep 9 00:24:55.712025 systemd[1]: Starting dracut-pre-mount.service...
Sep 9 00:24:55.713430 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:24:55.714280 systemd[1]: Starting ignition-kargs.service...
Sep 9 00:24:55.720858 systemd[1]: Finished dracut-pre-mount.service.
Sep 9 00:24:55.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.723614 ignition[755]: Ignition 2.14.0
Sep 9 00:24:55.723625 ignition[755]: Stage: kargs
Sep 9 00:24:55.723722 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:24:55.723731 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:24:55.726002 systemd[1]: Finished ignition-kargs.service.
Sep 9 00:24:55.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.724811 ignition[755]: kargs: kargs passed
Sep 9 00:24:55.724852 ignition[755]: Ignition finished successfully
Sep 9 00:24:55.728477 systemd[1]: Starting ignition-disks.service...
Sep 9 00:24:55.735487 ignition[766]: Ignition 2.14.0
Sep 9 00:24:55.735496 ignition[766]: Stage: disks
Sep 9 00:24:55.735599 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:24:55.737529 systemd[1]: Finished ignition-disks.service.
Sep 9 00:24:55.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.735609 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:24:55.739009 systemd[1]: Reached target initrd-root-device.target.
Sep 9 00:24:55.736467 ignition[766]: disks: disks passed
Sep 9 00:24:55.740220 systemd[1]: Reached target local-fs-pre.target.
Sep 9 00:24:55.736506 ignition[766]: Ignition finished successfully
Sep 9 00:24:55.741667 systemd[1]: Reached target local-fs.target.
Sep 9 00:24:55.742940 systemd[1]: Reached target sysinit.target.
Sep 9 00:24:55.744065 systemd[1]: Reached target basic.target.
Sep 9 00:24:55.746215 systemd[1]: Starting systemd-fsck-root.service...
Sep 9 00:24:55.758113 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Sep 9 00:24:55.765293 systemd[1]: Finished systemd-fsck-root.service.
Sep 9 00:24:55.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.768728 systemd[1]: Mounting sysroot.mount...
Sep 9 00:24:55.774978 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 9 00:24:55.775737 systemd[1]: Mounted sysroot.mount.
Sep 9 00:24:55.776716 systemd[1]: Reached target initrd-root-fs.target.
Sep 9 00:24:55.778839 systemd[1]: Mounting sysroot-usr.mount...
Sep 9 00:24:55.779829 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Sep 9 00:24:55.779878 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:24:55.779901 systemd[1]: Reached target ignition-diskful.target.
Sep 9 00:24:55.781798 systemd[1]: Mounted sysroot-usr.mount.
Sep 9 00:24:55.784602 systemd[1]: Starting initrd-setup-root.service...
Sep 9 00:24:55.789227 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:24:55.793548 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:24:55.796535 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:24:55.800747 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:24:55.829307 systemd[1]: Finished initrd-setup-root.service.
Sep 9 00:24:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.831034 systemd[1]: Starting ignition-mount.service...
Sep 9 00:24:55.832469 systemd[1]: Starting sysroot-boot.service...
Sep 9 00:24:55.836434 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Sep 9 00:24:55.846512 ignition[827]: INFO : Ignition 2.14.0
Sep 9 00:24:55.846512 ignition[827]: INFO : Stage: mount
Sep 9 00:24:55.848828 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:24:55.848828 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:24:55.848828 ignition[827]: INFO : mount: mount passed
Sep 9 00:24:55.848828 ignition[827]: INFO : Ignition finished successfully
Sep 9 00:24:55.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:55.849289 systemd[1]: Finished sysroot-boot.service.
Sep 9 00:24:55.850635 systemd[1]: Finished ignition-mount.service.
Sep 9 00:24:55.916548 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.34
Sep 9 00:24:55.916563 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Sep 9 00:24:56.505060 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 9 00:24:56.510980 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (835)
Sep 9 00:24:56.513400 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:24:56.513458 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:24:56.513469 kernel: BTRFS info (device vda6): has skinny extents
Sep 9 00:24:56.516457 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 9 00:24:56.518035 systemd[1]: Starting ignition-files.service...
Sep 9 00:24:56.532738 ignition[855]: INFO : Ignition 2.14.0
Sep 9 00:24:56.532738 ignition[855]: INFO : Stage: files
Sep 9 00:24:56.534600 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:24:56.534600 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:24:56.534600 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:24:56.538449 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:24:56.538449 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:24:56.543358 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:24:56.544942 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:24:56.546321 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:24:56.545345 unknown[855]: wrote ssh authorized keys file for user: core
Sep 9 00:24:56.549135 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 00:24:56.549135 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 9 00:24:56.596432 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:24:57.031106 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 9 00:24:57.033126 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:24:57.033126 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 00:24:57.261875 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:24:57.320346 systemd-networkd[739]: eth0: Gained IPv6LL
Sep 9 00:24:57.396776 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:24:57.398457 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:24:57.412929 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 00:24:57.412929 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 00:24:57.412929 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 00:24:57.412929 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 00:24:57.682670 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:24:58.368838 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 00:24:58.368838 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:24:58.371683 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:24:58.407397 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:24:58.409576 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:24:58.409576 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:24:58.409576 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:24:58.409576 ignition[855]: INFO : files: files passed
Sep 9 00:24:58.409576 ignition[855]: INFO : Ignition finished successfully
Sep 9 00:24:58.418870 kernel: kauditd_printk_skb: 24 callbacks suppressed
Sep 9 00:24:58.418894 kernel: audit: type=1130 audit(1757377498.410:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.409729 systemd[1]: Finished ignition-files.service.
Sep 9 00:24:58.412313 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 9 00:24:58.420863 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Sep 9 00:24:58.416107 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 9 00:24:58.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.426435 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:24:58.431920 kernel: audit: type=1130 audit(1757377498.421:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.431944 kernel: audit: type=1130 audit(1757377498.426:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.431954 kernel: audit: type=1131 audit(1757377498.426:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.417035 systemd[1]: Starting ignition-quench.service...
Sep 9 00:24:58.421776 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 9 00:24:58.423030 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:24:58.423101 systemd[1]: Finished ignition-quench.service.
Sep 9 00:24:58.427175 systemd[1]: Reached target ignition-complete.target.
Sep 9 00:24:58.435531 systemd[1]: Starting initrd-parse-etc.service...
Sep 9 00:24:58.448791 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:24:58.448890 systemd[1]: Finished initrd-parse-etc.service.
Sep 9 00:24:58.454362 kernel: audit: type=1130 audit(1757377498.450:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.454384 kernel: audit: type=1131 audit(1757377498.450:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.450335 systemd[1]: Reached target initrd-fs.target.
Sep 9 00:24:58.454893 systemd[1]: Reached target initrd.target.
Sep 9 00:24:58.455939 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 9 00:24:58.456782 systemd[1]: Starting dracut-pre-pivot.service...
Sep 9 00:24:58.467488 systemd[1]: Finished dracut-pre-pivot.service.
Sep 9 00:24:58.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.470997 kernel: audit: type=1130 audit(1757377498.468:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.469033 systemd[1]: Starting initrd-cleanup.service...
Sep 9 00:24:58.477522 systemd[1]: Stopped target nss-lookup.target.
Sep 9 00:24:58.478277 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 9 00:24:58.479045 systemd[1]: Stopped target timers.target.
Sep 9 00:24:58.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.480282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:24:58.489601 kernel: audit: type=1131 audit(1757377498.483:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.480389 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 9 00:24:58.483743 systemd[1]: Stopped target initrd.target.
Sep 9 00:24:58.486901 systemd[1]: Stopped target basic.target.
Sep 9 00:24:58.490218 systemd[1]: Stopped target ignition-complete.target.
Sep 9 00:24:58.491434 systemd[1]: Stopped target ignition-diskful.target.
Sep 9 00:24:58.492556 systemd[1]: Stopped target initrd-root-device.target.
Sep 9 00:24:58.493757 systemd[1]: Stopped target remote-fs.target.
Sep 9 00:24:58.494952 systemd[1]: Stopped target remote-fs-pre.target.
Sep 9 00:24:58.496248 systemd[1]: Stopped target sysinit.target.
Sep 9 00:24:58.497425 systemd[1]: Stopped target local-fs.target.
Sep 9 00:24:58.498566 systemd[1]: Stopped target local-fs-pre.target.
Sep 9 00:24:58.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.499737 systemd[1]: Stopped target swap.target.
Sep 9 00:24:58.506639 kernel: audit: type=1131 audit(1757377498.502:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.501043 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:24:58.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.510006 kernel: audit: type=1131 audit(1757377498.507:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.501155 systemd[1]: Stopped dracut-pre-mount.service.
Sep 9 00:24:58.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:24:58.502264 systemd[1]: Stopped target cryptsetup.target.
Sep 9 00:24:58.506186 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:24:58.506289 systemd[1]: Stopped dracut-initqueue.service.
Sep 9 00:24:58.507385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:24:58.507473 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 9 00:24:58.510821 systemd[1]: Stopped target paths.target.
Sep 9 00:24:58.511774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:24:58.515995 systemd[1]: Stopped systemd-ask-password-console.path. Sep 9 00:24:58.517268 systemd[1]: Stopped target slices.target. Sep 9 00:24:58.518478 systemd[1]: Stopped target sockets.target. Sep 9 00:24:58.519621 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:24:58.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.519729 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 9 00:24:58.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.520916 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:24:58.521024 systemd[1]: Stopped ignition-files.service. Sep 9 00:24:58.523363 systemd[1]: Stopping ignition-mount.service... Sep 9 00:24:58.526148 iscsid[744]: iscsid shutting down. Sep 9 00:24:58.524667 systemd[1]: Stopping iscsid.service... Sep 9 00:24:58.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.525486 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:24:58.525597 systemd[1]: Stopped kmod-static-nodes.service. Sep 9 00:24:58.527614 systemd[1]: Stopping sysroot-boot.service... Sep 9 00:24:58.528367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:24:58.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:24:58.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.528490 systemd[1]: Stopped systemd-udev-trigger.service. Sep 9 00:24:58.529784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:24:58.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.529869 systemd[1]: Stopped dracut-pre-trigger.service. Sep 9 00:24:58.535626 ignition[895]: INFO : Ignition 2.14.0 Sep 9 00:24:58.535626 ignition[895]: INFO : Stage: umount Sep 9 00:24:58.535626 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:24:58.535626 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:24:58.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.532438 systemd[1]: iscsid.service: Deactivated successfully. Sep 9 00:24:58.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:24:58.542165 ignition[895]: INFO : umount: umount passed Sep 9 00:24:58.542165 ignition[895]: INFO : Ignition finished successfully Sep 9 00:24:58.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.532542 systemd[1]: Stopped iscsid.service. Sep 9 00:24:58.533937 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:24:58.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.534019 systemd[1]: Closed iscsid.socket. Sep 9 00:24:58.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.535048 systemd[1]: Stopping iscsiuio.service... Sep 9 00:24:58.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.536259 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:24:58.536340 systemd[1]: Finished initrd-cleanup.service. Sep 9 00:24:58.539254 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:24:58.539330 systemd[1]: Stopped ignition-mount.service. Sep 9 00:24:58.541341 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:24:58.541731 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 9 00:24:58.541830 systemd[1]: Stopped iscsiuio.service. Sep 9 00:24:58.542734 systemd[1]: Stopped target network.target. Sep 9 00:24:58.544096 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:24:58.544130 systemd[1]: Closed iscsiuio.socket. 
Sep 9 00:24:58.545452 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:24:58.545494 systemd[1]: Stopped ignition-disks.service. Sep 9 00:24:58.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.546650 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:24:58.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.546694 systemd[1]: Stopped ignition-kargs.service. Sep 9 00:24:58.548703 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:24:58.548747 systemd[1]: Stopped ignition-setup.service. Sep 9 00:24:58.549919 systemd[1]: Stopping systemd-networkd.service... Sep 9 00:24:58.569000 audit: BPF prog-id=6 op=UNLOAD Sep 9 00:24:58.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.551217 systemd[1]: Stopping systemd-resolved.service... Sep 9 00:24:58.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.557108 systemd-networkd[739]: eth0: DHCPv6 lease lost Sep 9 00:24:58.570000 audit: BPF prog-id=9 op=UNLOAD Sep 9 00:24:58.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.558788 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 9 00:24:58.558896 systemd[1]: Stopped systemd-networkd.service. Sep 9 00:24:58.561199 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:24:58.561287 systemd[1]: Stopped systemd-resolved.service. Sep 9 00:24:58.562809 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:24:58.562841 systemd[1]: Closed systemd-networkd.socket. Sep 9 00:24:58.564472 systemd[1]: Stopping network-cleanup.service... Sep 9 00:24:58.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.565938 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:24:58.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.566017 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 9 00:24:58.569601 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:24:58.569654 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:24:58.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.571383 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:24:58.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.571427 systemd[1]: Stopped systemd-modules-load.service. Sep 9 00:24:58.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:24:58.572429 systemd[1]: Stopping systemd-udevd.service... Sep 9 00:24:58.576738 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:24:58.581333 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:24:58.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.581469 systemd[1]: Stopped systemd-udevd.service. Sep 9 00:24:58.582799 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:24:58.582890 systemd[1]: Stopped network-cleanup.service. Sep 9 00:24:58.583864 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:24:58.583902 systemd[1]: Closed systemd-udevd-control.socket. Sep 9 00:24:58.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.585137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:24:58.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.585173 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 9 00:24:58.586493 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:24:58.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:24:58.586544 systemd[1]: Stopped dracut-pre-udev.service. Sep 9 00:24:58.589246 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:24:58.589288 systemd[1]: Stopped dracut-cmdline.service. Sep 9 00:24:58.590346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:24:58.590379 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 9 00:24:58.592629 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 9 00:24:58.593652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:24:58.593701 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 9 00:24:58.598351 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:24:58.598458 systemd[1]: Stopped sysroot-boot.service. Sep 9 00:24:58.599778 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:24:58.599859 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 9 00:24:58.600900 systemd[1]: Reached target initrd-switch-root.target. Sep 9 00:24:58.602261 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:24:58.602314 systemd[1]: Stopped initrd-setup-root.service. Sep 9 00:24:58.604343 systemd[1]: Starting initrd-switch-root.service... Sep 9 00:24:58.613598 systemd[1]: Switching root. Sep 9 00:24:58.624383 systemd-journald[291]: Journal stopped Sep 9 00:25:00.796506 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Sep 9 00:25:00.796573 kernel: SELinux: Class mctp_socket not defined in policy. Sep 9 00:25:00.796585 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 9 00:25:00.796596 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 9 00:25:00.796605 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:25:00.796620 kernel: SELinux: policy capability open_perms=1 Sep 9 00:25:00.796630 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:25:00.796639 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:25:00.796653 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:25:00.796664 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:25:00.796673 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:25:00.796682 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:25:00.796693 systemd[1]: Successfully loaded SELinux policy in 33.641ms. Sep 9 00:25:00.796706 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.030ms. Sep 9 00:25:00.796717 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 9 00:25:00.796728 systemd[1]: Detected virtualization kvm. Sep 9 00:25:00.796738 systemd[1]: Detected architecture arm64. Sep 9 00:25:00.796749 systemd[1]: Detected first boot. Sep 9 00:25:00.796760 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:25:00.796770 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 9 00:25:00.796781 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:25:00.796791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 9 00:25:00.796804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:00.796815 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:00.796826 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:25:00.796840 systemd[1]: Stopped initrd-switch-root.service. Sep 9 00:25:00.796851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:25:00.796861 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 9 00:25:00.796871 systemd[1]: Created slice system-addon\x2drun.slice. Sep 9 00:25:00.796883 systemd[1]: Created slice system-getty.slice. Sep 9 00:25:00.796904 systemd[1]: Created slice system-modprobe.slice. Sep 9 00:25:00.796914 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 9 00:25:00.796925 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 9 00:25:00.796936 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 9 00:25:00.796947 systemd[1]: Created slice user.slice. Sep 9 00:25:00.796968 systemd[1]: Started systemd-ask-password-console.path. Sep 9 00:25:00.796979 systemd[1]: Started systemd-ask-password-wall.path. Sep 9 00:25:00.796989 systemd[1]: Set up automount boot.automount. Sep 9 00:25:00.797001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 9 00:25:00.797012 systemd[1]: Stopped target initrd-switch-root.target. Sep 9 00:25:00.797023 systemd[1]: Stopped target initrd-fs.target. Sep 9 00:25:00.797033 systemd[1]: Stopped target initrd-root-fs.target. Sep 9 00:25:00.797043 systemd[1]: Reached target integritysetup.target. Sep 9 00:25:00.797053 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:25:00.797064 systemd[1]: Reached target remote-fs.target. 
Sep 9 00:25:00.797075 systemd[1]: Reached target slices.target. Sep 9 00:25:00.797087 systemd[1]: Reached target swap.target. Sep 9 00:25:00.797098 systemd[1]: Reached target torcx.target. Sep 9 00:25:00.797108 systemd[1]: Reached target veritysetup.target. Sep 9 00:25:00.797119 systemd[1]: Listening on systemd-coredump.socket. Sep 9 00:25:00.797129 systemd[1]: Listening on systemd-initctl.socket. Sep 9 00:25:00.797139 systemd[1]: Listening on systemd-networkd.socket. Sep 9 00:25:00.797149 systemd[1]: Listening on systemd-udevd-control.socket. Sep 9 00:25:00.797159 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 9 00:25:00.797169 systemd[1]: Listening on systemd-userdbd.socket. Sep 9 00:25:00.797179 systemd[1]: Mounting dev-hugepages.mount... Sep 9 00:25:00.797191 systemd[1]: Mounting dev-mqueue.mount... Sep 9 00:25:00.797201 systemd[1]: Mounting media.mount... Sep 9 00:25:00.797211 systemd[1]: Mounting sys-kernel-debug.mount... Sep 9 00:25:00.797221 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 9 00:25:00.797231 systemd[1]: Mounting tmp.mount... Sep 9 00:25:00.797241 systemd[1]: Starting flatcar-tmpfiles.service... Sep 9 00:25:00.797252 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:00.797262 systemd[1]: Starting kmod-static-nodes.service... Sep 9 00:25:00.797275 systemd[1]: Starting modprobe@configfs.service... Sep 9 00:25:00.797287 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:00.797299 systemd[1]: Starting modprobe@drm.service... Sep 9 00:25:00.797309 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:00.797319 systemd[1]: Starting modprobe@fuse.service... Sep 9 00:25:00.797330 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:00.797340 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:25:00.797355 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Sep 9 00:25:00.797366 systemd[1]: Stopped systemd-fsck-root.service. Sep 9 00:25:00.797378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:25:00.797389 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:25:00.797399 systemd[1]: Stopped systemd-journald.service. Sep 9 00:25:00.797409 systemd[1]: Starting systemd-journald.service... Sep 9 00:25:00.797420 kernel: fuse: init (API version 7.34) Sep 9 00:25:00.797432 systemd[1]: Starting systemd-modules-load.service... Sep 9 00:25:00.797444 systemd[1]: Starting systemd-network-generator.service... Sep 9 00:25:00.797455 kernel: loop: module loaded Sep 9 00:25:00.797465 systemd[1]: Starting systemd-remount-fs.service... Sep 9 00:25:00.797475 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:25:00.797492 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:25:00.797504 systemd[1]: Stopped verity-setup.service. Sep 9 00:25:00.797518 systemd[1]: Mounted dev-hugepages.mount. Sep 9 00:25:00.797529 systemd[1]: Mounted dev-mqueue.mount. Sep 9 00:25:00.797539 systemd[1]: Mounted media.mount. Sep 9 00:25:00.797550 systemd[1]: Mounted sys-kernel-debug.mount. Sep 9 00:25:00.797560 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 9 00:25:00.797574 systemd-journald[989]: Journal started Sep 9 00:25:00.797617 systemd-journald[989]: Runtime Journal (/run/log/journal/3f7792760be049928cc9761df70a2a7e) is 6.0M, max 48.7M, 42.6M free. Sep 9 00:25:00.797650 systemd[1]: Mounted tmp.mount. 
Sep 9 00:24:58.682000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:24:58.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:24:58.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 9 00:24:58.785000 audit: BPF prog-id=10 op=LOAD Sep 9 00:24:58.785000 audit: BPF prog-id=10 op=UNLOAD Sep 9 00:24:58.785000 audit: BPF prog-id=11 op=LOAD Sep 9 00:24:58.785000 audit: BPF prog-id=11 op=UNLOAD Sep 9 00:24:58.824000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 9 00:24:58.824000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:24:58.824000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:24:58.825000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 9 00:24:58.825000 audit[928]: SYSCALL arch=c00000b7 syscall=34 
success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:24:58.825000 audit: CWD cwd="/" Sep 9 00:24:58.825000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:24:58.825000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 9 00:24:58.825000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 9 00:25:00.680000 audit: BPF prog-id=12 op=LOAD Sep 9 00:25:00.680000 audit: BPF prog-id=3 op=UNLOAD Sep 9 00:25:00.680000 audit: BPF prog-id=13 op=LOAD Sep 9 00:25:00.680000 audit: BPF prog-id=14 op=LOAD Sep 9 00:25:00.680000 audit: BPF prog-id=4 op=UNLOAD Sep 9 00:25:00.680000 audit: BPF prog-id=5 op=UNLOAD Sep 9 00:25:00.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:00.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.693000 audit: BPF prog-id=12 op=UNLOAD Sep 9 00:25:00.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.768000 audit: BPF prog-id=15 op=LOAD Sep 9 00:25:00.769000 audit: BPF prog-id=16 op=LOAD Sep 9 00:25:00.770000 audit: BPF prog-id=17 op=LOAD Sep 9 00:25:00.770000 audit: BPF prog-id=13 op=UNLOAD Sep 9 00:25:00.770000 audit: BPF prog-id=14 op=UNLOAD Sep 9 00:25:00.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:00.794000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 9 00:25:00.794000 audit[989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe738dea0 a2=4000 a3=1 items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 9 00:25:00.794000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 9 00:25:00.679781 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:25:00.799707 systemd[1]: Started systemd-journald.service. Sep 9 00:24:58.823714 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:00.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.679793 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 9 00:24:58.823968 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:25:00.682626 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 9 00:24:58.823988 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:24:58.824020 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 9 00:24:58.824031 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 9 00:24:58.824059 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 9 00:24:58.824071 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 9 00:24:58.824268 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 9 00:25:00.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:24:58.824302 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 9 00:25:00.801115 systemd[1]: Finished kmod-static-nodes.service. 
Sep 9 00:24:58.824314 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 9 00:24:58.825009 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 9 00:25:00.802135 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:24:58.825045 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 9 00:24:58.825080 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 9 00:24:58.825095 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 9 00:24:58.825113 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 9 00:24:58.825126 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:24:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 9 00:25:00.359636 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:00.359891 
/usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:00.360020 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:00.360206 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 9 00:25:00.360254 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 9 00:25:00.360315 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-09-09T00:25:00Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 9 00:25:00.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:00.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.803308 systemd[1]: Finished modprobe@configfs.service. Sep 9 00:25:00.804440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:00.804631 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:00.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.805673 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:25:00.805843 systemd[1]: Finished modprobe@drm.service. Sep 9 00:25:00.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.806795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:00.807254 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:00.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:00.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.808147 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:25:00.809198 systemd[1]: Finished modprobe@fuse.service. Sep 9 00:25:00.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.811377 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:00.812230 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:00.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.813241 systemd[1]: Finished systemd-modules-load.service. Sep 9 00:25:00.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.814201 systemd[1]: Finished systemd-network-generator.service. 
Sep 9 00:25:00.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.815333 systemd[1]: Finished systemd-remount-fs.service. Sep 9 00:25:00.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.816567 systemd[1]: Reached target network-pre.target. Sep 9 00:25:00.818724 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 9 00:25:00.822777 systemd[1]: Mounting sys-kernel-config.mount... Sep 9 00:25:00.823459 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:25:00.827890 systemd[1]: Starting systemd-hwdb-update.service... Sep 9 00:25:00.829836 systemd[1]: Starting systemd-journal-flush.service... Sep 9 00:25:00.830689 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:25:00.831750 systemd[1]: Starting systemd-random-seed.service... Sep 9 00:25:00.832458 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:00.833650 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:25:00.836103 systemd-journald[989]: Time spent on flushing to /var/log/journal/3f7792760be049928cc9761df70a2a7e is 13.806ms for 990 entries. Sep 9 00:25:00.836103 systemd-journald[989]: System Journal (/var/log/journal/3f7792760be049928cc9761df70a2a7e) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:25:00.863143 systemd-journald[989]: Received client request to flush runtime journal. 
Sep 9 00:25:00.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.835632 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 9 00:25:00.837310 systemd[1]: Mounted sys-kernel-config.mount. Sep 9 00:25:00.848983 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:25:00.864530 udevadm[1026]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 00:25:00.850887 systemd[1]: Starting systemd-udev-settle.service... Sep 9 00:25:00.854990 systemd[1]: Finished systemd-random-seed.service. Sep 9 00:25:00.855769 systemd[1]: Reached target first-boot-complete.target. Sep 9 00:25:00.858464 systemd[1]: Finished flatcar-tmpfiles.service. Sep 9 00:25:00.860833 systemd[1]: Starting systemd-sysusers.service... Sep 9 00:25:00.865674 systemd[1]: Finished systemd-journal-flush.service. Sep 9 00:25:00.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.866788 systemd[1]: Finished systemd-sysctl.service. 
Sep 9 00:25:00.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:00.878194 systemd[1]: Finished systemd-sysusers.service. Sep 9 00:25:00.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.332912 systemd[1]: Finished systemd-hwdb-update.service. Sep 9 00:25:01.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.333000 audit: BPF prog-id=18 op=LOAD Sep 9 00:25:01.333000 audit: BPF prog-id=19 op=LOAD Sep 9 00:25:01.333000 audit: BPF prog-id=7 op=UNLOAD Sep 9 00:25:01.333000 audit: BPF prog-id=8 op=UNLOAD Sep 9 00:25:01.335059 systemd[1]: Starting systemd-udevd.service... Sep 9 00:25:01.352242 systemd-udevd[1031]: Using default interface naming scheme 'v252'. Sep 9 00:25:01.366129 systemd[1]: Started systemd-udevd.service. Sep 9 00:25:01.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.373000 audit: BPF prog-id=20 op=LOAD Sep 9 00:25:01.375300 systemd[1]: Starting systemd-networkd.service... Sep 9 00:25:01.390000 audit: BPF prog-id=21 op=LOAD Sep 9 00:25:01.391000 audit: BPF prog-id=22 op=LOAD Sep 9 00:25:01.391000 audit: BPF prog-id=23 op=LOAD Sep 9 00:25:01.393007 systemd[1]: Starting systemd-userdbd.service... Sep 9 00:25:01.397385 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
Sep 9 00:25:01.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.426132 systemd[1]: Started systemd-userdbd.service. Sep 9 00:25:01.478571 systemd-networkd[1051]: lo: Link UP Sep 9 00:25:01.478580 systemd-networkd[1051]: lo: Gained carrier Sep 9 00:25:01.478944 systemd-networkd[1051]: Enumeration completed Sep 9 00:25:01.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.479064 systemd[1]: Started systemd-networkd.service. Sep 9 00:25:01.479070 systemd-networkd[1051]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:25:01.480848 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:25:01.482769 systemd-networkd[1051]: eth0: Link UP Sep 9 00:25:01.482780 systemd-networkd[1051]: eth0: Gained carrier Sep 9 00:25:01.485468 systemd[1]: Finished systemd-udev-settle.service. Sep 9 00:25:01.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.487661 systemd[1]: Starting lvm2-activation-early.service... Sep 9 00:25:01.497774 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:25:01.505096 systemd-networkd[1051]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:25:01.521926 systemd[1]: Finished lvm2-activation-early.service. 
Sep 9 00:25:01.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.522794 systemd[1]: Reached target cryptsetup.target. Sep 9 00:25:01.524674 systemd[1]: Starting lvm2-activation.service... Sep 9 00:25:01.528416 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:25:01.569052 systemd[1]: Finished lvm2-activation.service. Sep 9 00:25:01.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.569851 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:25:01.570577 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:25:01.570605 systemd[1]: Reached target local-fs.target. Sep 9 00:25:01.571238 systemd[1]: Reached target machines.target. Sep 9 00:25:01.573112 systemd[1]: Starting ldconfig.service... Sep 9 00:25:01.574086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:01.574158 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:01.575327 systemd[1]: Starting systemd-boot-update.service... Sep 9 00:25:01.577010 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 9 00:25:01.578934 systemd[1]: Starting systemd-machine-id-commit.service... Sep 9 00:25:01.580827 systemd[1]: Starting systemd-sysext.service... 
Sep 9 00:25:01.581917 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1067 (bootctl) Sep 9 00:25:01.583259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 9 00:25:01.598757 systemd[1]: Unmounting usr-share-oem.mount... Sep 9 00:25:01.601807 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 9 00:25:01.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.613571 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 9 00:25:01.613783 systemd[1]: Unmounted usr-share-oem.mount. Sep 9 00:25:01.633981 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 00:25:01.749516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:25:01.753654 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) Sep 9 00:25:01.753654 systemd-fsck[1077]: /dev/vda1: 236 files, 117310/258078 clusters Sep 9 00:25:01.754272 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 9 00:25:01.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.758898 systemd[1]: Mounting boot.mount... Sep 9 00:25:01.766844 systemd[1]: Mounted boot.mount. Sep 9 00:25:01.781097 kernel: loop1: detected capacity change from 0 to 211168 Sep 9 00:25:01.781008 systemd[1]: Finished systemd-boot-update.service. Sep 9 00:25:01.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:01.799517 (sd-sysext)[1082]: Using extensions 'kubernetes'. Sep 9 00:25:01.800250 (sd-sysext)[1082]: Merged extensions into '/usr'. Sep 9 00:25:01.824809 systemd[1]: Mounting usr-share-oem.mount... Sep 9 00:25:01.825724 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:01.827003 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:01.829014 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:01.831636 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:01.832527 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:01.832663 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:01.835465 systemd[1]: Mounted usr-share-oem.mount. Sep 9 00:25:01.837017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:01.837157 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:01.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.838388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:01.838539 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:01.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:01.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.840043 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:01.840159 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:01.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.841398 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:25:01.841515 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:01.843544 systemd[1]: Finished systemd-sysext.service. Sep 9 00:25:01.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:01.846148 systemd[1]: Starting ensure-sysext.service... Sep 9 00:25:01.848169 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 9 00:25:01.856126 systemd[1]: Reloading. 
Sep 9 00:25:01.895187 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:25:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:01.895218 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:25:01Z" level=info msg="torcx already run" Sep 9 00:25:01.928640 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 9 00:25:01.933781 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:25:01.948550 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:25:01.961760 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:25:01.961780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:01.979495 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:01.987461 ldconfig[1066]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Sep 9 00:25:02.025000 audit: BPF prog-id=24 op=LOAD Sep 9 00:25:02.025000 audit: BPF prog-id=21 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=25 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=26 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=22 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=23 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=27 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=28 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=18 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=19 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=29 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=15 op=UNLOAD Sep 9 00:25:02.026000 audit: BPF prog-id=30 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=31 op=LOAD Sep 9 00:25:02.026000 audit: BPF prog-id=16 op=UNLOAD Sep 9 00:25:02.027000 audit: BPF prog-id=17 op=UNLOAD Sep 9 00:25:02.027000 audit: BPF prog-id=32 op=LOAD Sep 9 00:25:02.027000 audit: BPF prog-id=20 op=UNLOAD Sep 9 00:25:02.039057 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.040232 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:02.041983 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:02.043576 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:02.044182 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.044303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:02.045086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:02.045210 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:02.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:02.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.046211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:02.046317 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:02.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.047306 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:02.047409 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:02.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.049400 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.050598 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:02.052286 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:02.054192 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:02.054773 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 9 00:25:02.054890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:02.055666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:02.055783 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:02.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.056788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:02.056887 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:02.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.058032 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:02.058139 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:02.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:02.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.061084 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.062185 systemd[1]: Starting modprobe@dm_mod.service... Sep 9 00:25:02.063757 systemd[1]: Starting modprobe@drm.service... Sep 9 00:25:02.065414 systemd[1]: Starting modprobe@efi_pstore.service... Sep 9 00:25:02.067049 systemd[1]: Starting modprobe@loop.service... Sep 9 00:25:02.067691 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.067815 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 9 00:25:02.068930 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 9 00:25:02.070677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:25:02.070789 systemd[1]: Finished modprobe@dm_mod.service. Sep 9 00:25:02.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.071904 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:25:02.072033 systemd[1]: Finished modprobe@drm.service. 
Sep 9 00:25:02.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.073064 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 9 00:25:02.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.074088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:25:02.074195 systemd[1]: Finished modprobe@efi_pstore.service. Sep 9 00:25:02.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.075244 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:25:02.075353 systemd[1]: Finished modprobe@loop.service. Sep 9 00:25:02.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:25:02.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.077863 systemd[1]: Starting audit-rules.service... Sep 9 00:25:02.079720 systemd[1]: Starting clean-ca-certificates.service... Sep 9 00:25:02.081503 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 9 00:25:02.082701 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:25:02.082764 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 9 00:25:02.082000 audit: BPF prog-id=33 op=LOAD Sep 9 00:25:02.084307 systemd[1]: Starting systemd-resolved.service... Sep 9 00:25:02.084000 audit: BPF prog-id=34 op=LOAD Sep 9 00:25:02.086667 systemd[1]: Starting systemd-timesyncd.service... Sep 9 00:25:02.088556 systemd[1]: Starting systemd-update-utmp.service... Sep 9 00:25:02.089872 systemd[1]: Finished ensure-sysext.service. Sep 9 00:25:02.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.091033 systemd[1]: Finished clean-ca-certificates.service. Sep 9 00:25:02.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:25:02.092831 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 9 00:25:02.097000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.099258 systemd[1]: Finished systemd-update-utmp.service.
Sep 9 00:25:02.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.101057 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:25:02.101618 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 9 00:25:02.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.104687 systemd[1]: Finished ldconfig.service.
Sep 9 00:25:02.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.115755 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 9 00:25:02.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.117908 systemd[1]: Starting systemd-update-done.service...
Sep 9 00:25:02.128064 systemd[1]: Finished systemd-update-done.service.
Sep 9 00:25:02.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:25:02.128000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 9 00:25:02.128000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd89b6730 a2=420 a3=0 items=0 ppid=1159 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:25:02.128000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 9 00:25:02.129698 augenrules[1181]: No rules
Sep 9 00:25:02.130276 systemd[1]: Finished audit-rules.service.
Sep 9 00:25:02.135616 systemd-resolved[1163]: Positive Trust Anchors:
Sep 9 00:25:02.135876 systemd-resolved[1163]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:25:02.135974 systemd-resolved[1163]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:25:02.144984 systemd-resolved[1163]: Defaulting to hostname 'linux'.
Sep 9 00:25:02.146537 systemd[1]: Started systemd-resolved.service.
Sep 9 00:25:02.147331 systemd-timesyncd[1166]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:25:02.147387 systemd-timesyncd[1166]: Initial clock synchronization to Tue 2025-09-09 00:25:02.291162 UTC.
Sep 9 00:25:02.147423 systemd[1]: Started systemd-timesyncd.service.
Sep 9 00:25:02.148161 systemd[1]: Reached target network.target.
Sep 9 00:25:02.148748 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:25:02.149388 systemd[1]: Reached target sysinit.target.
Sep 9 00:25:02.150039 systemd[1]: Started motdgen.path.
Sep 9 00:25:02.150570 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 9 00:25:02.151404 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 9 00:25:02.152049 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:25:02.152078 systemd[1]: Reached target paths.target.
Sep 9 00:25:02.152657 systemd[1]: Reached target time-set.target.
Sep 9 00:25:02.153412 systemd[1]: Started logrotate.timer.
Sep 9 00:25:02.154069 systemd[1]: Started mdadm.timer.
Sep 9 00:25:02.154574 systemd[1]: Reached target timers.target.
Sep 9 00:25:02.155453 systemd[1]: Listening on dbus.socket.
Sep 9 00:25:02.157085 systemd[1]: Starting docker.socket...
Sep 9 00:25:02.160170 systemd[1]: Listening on sshd.socket.
Sep 9 00:25:02.160853 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:25:02.161301 systemd[1]: Listening on docker.socket.
Sep 9 00:25:02.161950 systemd[1]: Reached target sockets.target.
Sep 9 00:25:02.162553 systemd[1]: Reached target basic.target.
Sep 9 00:25:02.163137 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:25:02.163163 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:25:02.164082 systemd[1]: Starting containerd.service...
Sep 9 00:25:02.165597 systemd[1]: Starting dbus.service...
Sep 9 00:25:02.167150 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 9 00:25:02.168930 systemd[1]: Starting extend-filesystems.service...
Sep 9 00:25:02.169912 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 9 00:25:02.171785 jq[1191]: false
Sep 9 00:25:02.171080 systemd[1]: Starting motdgen.service...
Sep 9 00:25:02.172763 systemd[1]: Starting prepare-helm.service...
Sep 9 00:25:02.174524 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 9 00:25:02.176833 systemd[1]: Starting sshd-keygen.service...
Sep 9 00:25:02.181300 systemd[1]: Starting systemd-logind.service...
Sep 9 00:25:02.182272 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:25:02.182348 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:25:02.182752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:25:02.183776 systemd[1]: Starting update-engine.service...
Sep 9 00:25:02.185805 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 9 00:25:02.186524 extend-filesystems[1192]: Found loop1
Sep 9 00:25:02.188262 extend-filesystems[1192]: Found vda
Sep 9 00:25:02.189145 jq[1209]: true
Sep 9 00:25:02.188354 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:25:02.188550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 9 00:25:02.189532 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:25:02.189709 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 9 00:25:02.190715 extend-filesystems[1192]: Found vda1
Sep 9 00:25:02.192824 extend-filesystems[1192]: Found vda2
Sep 9 00:25:02.194778 extend-filesystems[1192]: Found vda3
Sep 9 00:25:02.196158 extend-filesystems[1192]: Found usr
Sep 9 00:25:02.197171 extend-filesystems[1192]: Found vda4
Sep 9 00:25:02.198113 extend-filesystems[1192]: Found vda6
Sep 9 00:25:02.198113 extend-filesystems[1192]: Found vda7
Sep 9 00:25:02.198113 extend-filesystems[1192]: Found vda9
Sep 9 00:25:02.198113 extend-filesystems[1192]: Checking size of /dev/vda9
Sep 9 00:25:02.199168 dbus-daemon[1190]: [system] SELinux support is enabled
Sep 9 00:25:02.209743 tar[1211]: linux-arm64/LICENSE
Sep 9 00:25:02.209743 tar[1211]: linux-arm64/helm
Sep 9 00:25:02.209918 jq[1213]: true
Sep 9 00:25:02.199362 systemd[1]: Started dbus.service.
Sep 9 00:25:02.205066 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:25:02.205099 systemd[1]: Reached target system-config.target.
Sep 9 00:25:02.205868 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:25:02.205884 systemd[1]: Reached target user-config.target.
Sep 9 00:25:02.215362 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:25:02.215553 systemd[1]: Finished motdgen.service.
Sep 9 00:25:02.226023 extend-filesystems[1192]: Resized partition /dev/vda9
Sep 9 00:25:02.235469 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021)
Sep 9 00:25:02.246986 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:25:02.250338 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 00:25:02.251634 systemd-logind[1203]: New seat seat0.
Sep 9 00:25:02.256063 systemd[1]: Started systemd-logind.service.
Sep 9 00:25:02.266422 update_engine[1207]: I0909 00:25:02.266150 1207 main.cc:92] Flatcar Update Engine starting
Sep 9 00:25:02.275600 update_engine[1207]: I0909 00:25:02.270013 1207 update_check_scheduler.cc:74] Next update check in 11m58s
Sep 9 00:25:02.269621 systemd[1]: Started update-engine.service.
Sep 9 00:25:02.273519 systemd[1]: Started locksmithd.service.
Sep 9 00:25:02.283576 env[1214]: time="2025-09-09T00:25:02.283508160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 9 00:25:02.300827 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:25:02.300888 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:25:02.300888 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:25:02.300888 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:25:02.305858 extend-filesystems[1192]: Resized filesystem in /dev/vda9
Sep 9 00:25:02.303025 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.302338760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.302494600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.303904600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.303934160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304155440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304173120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304186600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304196440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304271280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307244 env[1214]: time="2025-09-09T00:25:02.304572320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307456 bash[1240]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:25:02.303216 systemd[1]: Finished extend-filesystems.service.
Sep 9 00:25:02.307624 env[1214]: time="2025-09-09T00:25:02.304706480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:25:02.307624 env[1214]: time="2025-09-09T00:25:02.304722560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:25:02.307624 env[1214]: time="2025-09-09T00:25:02.304775080Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 9 00:25:02.307624 env[1214]: time="2025-09-09T00:25:02.304786480Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:25:02.307302 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 9 00:25:02.311248 env[1214]: time="2025-09-09T00:25:02.311194960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:25:02.311248 env[1214]: time="2025-09-09T00:25:02.311233080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:25:02.311248 env[1214]: time="2025-09-09T00:25:02.311251360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:25:02.311391 env[1214]: time="2025-09-09T00:25:02.311291800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311391 env[1214]: time="2025-09-09T00:25:02.311308760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311432 env[1214]: time="2025-09-09T00:25:02.311336440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311432 env[1214]: time="2025-09-09T00:25:02.311409920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311779 env[1214]: time="2025-09-09T00:25:02.311754720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311814 env[1214]: time="2025-09-09T00:25:02.311783640Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311814 env[1214]: time="2025-09-09T00:25:02.311799320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311855 env[1214]: time="2025-09-09T00:25:02.311812040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.311855 env[1214]: time="2025-09-09T00:25:02.311825200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:25:02.312026 env[1214]: time="2025-09-09T00:25:02.311981960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:25:02.312026 env[1214]: time="2025-09-09T00:25:02.312070520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:25:02.312352 env[1214]: time="2025-09-09T00:25:02.312331200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:25:02.312393 env[1214]: time="2025-09-09T00:25:02.312361840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312393 env[1214]: time="2025-09-09T00:25:02.312376000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:25:02.312505 env[1214]: time="2025-09-09T00:25:02.312489480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312539 env[1214]: time="2025-09-09T00:25:02.312507760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312539 env[1214]: time="2025-09-09T00:25:02.312521840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312539 env[1214]: time="2025-09-09T00:25:02.312532800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312638 env[1214]: time="2025-09-09T00:25:02.312555960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312638 env[1214]: time="2025-09-09T00:25:02.312571520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312638 env[1214]: time="2025-09-09T00:25:02.312583320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312638 env[1214]: time="2025-09-09T00:25:02.312594520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312638 env[1214]: time="2025-09-09T00:25:02.312608440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:25:02.312766 env[1214]: time="2025-09-09T00:25:02.312743520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312795 env[1214]: time="2025-09-09T00:25:02.312767680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312795 env[1214]: time="2025-09-09T00:25:02.312780600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.312795 env[1214]: time="2025-09-09T00:25:02.312792520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 00:25:02.312850 env[1214]: time="2025-09-09T00:25:02.312806800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 9 00:25:02.312850 env[1214]: time="2025-09-09T00:25:02.312817600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 00:25:02.312850 env[1214]: time="2025-09-09T00:25:02.312833800Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 9 00:25:02.312910 env[1214]: time="2025-09-09T00:25:02.312866920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 00:25:02.313142 env[1214]: time="2025-09-09T00:25:02.313083080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 00:25:02.314010 env[1214]: time="2025-09-09T00:25:02.313143400Z" level=info msg="Connect containerd service"
Sep 9 00:25:02.314010 env[1214]: time="2025-09-09T00:25:02.313174400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 00:25:02.314010 env[1214]: time="2025-09-09T00:25:02.313993200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:25:02.314284 env[1214]: time="2025-09-09T00:25:02.314181880Z" level=info msg="Start subscribing containerd event"
Sep 9 00:25:02.314284 env[1214]: time="2025-09-09T00:25:02.314229480Z" level=info msg="Start recovering state"
Sep 9 00:25:02.314338 env[1214]: time="2025-09-09T00:25:02.314292040Z" level=info msg="Start event monitor"
Sep 9 00:25:02.314338 env[1214]: time="2025-09-09T00:25:02.314309880Z" level=info msg="Start snapshots syncer"
Sep 9 00:25:02.314338 env[1214]: time="2025-09-09T00:25:02.314319720Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:25:02.314338 env[1214]: time="2025-09-09T00:25:02.314326720Z" level=info msg="Start streaming server"
Sep 9 00:25:02.314716 env[1214]: time="2025-09-09T00:25:02.314680720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:25:02.314746 env[1214]: time="2025-09-09T00:25:02.314737920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:25:02.314891 systemd[1]: Started containerd.service.
Sep 9 00:25:02.321523 env[1214]: time="2025-09-09T00:25:02.314806160Z" level=info msg="containerd successfully booted in 0.042890s"
Sep 9 00:25:02.333622 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:25:02.628616 tar[1211]: linux-arm64/README.md
Sep 9 00:25:02.633073 systemd[1]: Finished prepare-helm.service.
Sep 9 00:25:03.463257 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:25:03.482259 systemd[1]: Finished sshd-keygen.service.
Sep 9 00:25:03.485170 systemd[1]: Starting issuegen.service...
Sep 9 00:25:03.491942 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:25:03.492359 systemd[1]: Finished issuegen.service.
Sep 9 00:25:03.496399 systemd[1]: Starting systemd-user-sessions.service...
Sep 9 00:25:03.501963 systemd[1]: Finished systemd-user-sessions.service.
Sep 9 00:25:03.504402 systemd[1]: Started getty@tty1.service.
Sep 9 00:25:03.506718 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 9 00:25:03.507871 systemd[1]: Reached target getty.target.
Sep 9 00:25:03.528284 systemd-networkd[1051]: eth0: Gained IPv6LL
Sep 9 00:25:03.530024 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 9 00:25:03.532478 systemd[1]: Reached target network-online.target.
Sep 9 00:25:03.535363 systemd[1]: Starting kubelet.service...
Sep 9 00:25:04.306392 systemd[1]: Started kubelet.service.
Sep 9 00:25:04.308298 systemd[1]: Reached target multi-user.target.
Sep 9 00:25:04.310688 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 9 00:25:04.318092 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 9 00:25:04.318263 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 9 00:25:04.319553 systemd[1]: Startup finished in 554ms (kernel) + 5.091s (initrd) + 5.671s (userspace) = 11.318s.
Sep 9 00:25:04.768766 kubelet[1271]: E0909 00:25:04.768562 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:25:04.771736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:25:04.771852 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:25:05.735823 systemd[1]: Created slice system-sshd.slice.
Sep 9 00:25:05.739134 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:33090.service.
Sep 9 00:25:05.797476 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:25:05.800837 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:05.812813 systemd-logind[1203]: New session 1 of user core.
Sep 9 00:25:05.812861 systemd[1]: Created slice user-500.slice.
Sep 9 00:25:05.814041 systemd[1]: Starting user-runtime-dir@500.service...
Sep 9 00:25:05.825022 systemd[1]: Finished user-runtime-dir@500.service.
Sep 9 00:25:05.829004 systemd[1]: Starting user@500.service...
Sep 9 00:25:05.831935 (systemd)[1283]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:05.900242 systemd[1283]: Queued start job for default target default.target.
Sep 9 00:25:05.900731 systemd[1283]: Reached target paths.target.
Sep 9 00:25:05.900763 systemd[1283]: Reached target sockets.target.
Sep 9 00:25:05.900779 systemd[1283]: Reached target timers.target.
Sep 9 00:25:05.900789 systemd[1283]: Reached target basic.target.
Sep 9 00:25:05.900826 systemd[1283]: Reached target default.target.
Sep 9 00:25:05.900865 systemd[1283]: Startup finished in 62ms.
Sep 9 00:25:05.901368 systemd[1]: Started user@500.service.
Sep 9 00:25:05.903579 systemd[1]: Started session-1.scope.
Sep 9 00:25:05.963068 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:33092.service.
Sep 9 00:25:06.006311 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:25:06.008149 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:06.012018 systemd-logind[1203]: New session 2 of user core.
Sep 9 00:25:06.013211 systemd[1]: Started session-2.scope.
Sep 9 00:25:06.072993 sshd[1292]: pam_unix(sshd:session): session closed for user core
Sep 9 00:25:06.077193 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:33102.service.
Sep 9 00:25:06.077799 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:33092.service: Deactivated successfully.
Sep 9 00:25:06.078455 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:25:06.079032 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:25:06.079985 systemd-logind[1203]: Removed session 2.
Sep 9 00:25:06.110378 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 33102 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:25:06.111512 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:06.116741 systemd-logind[1203]: New session 3 of user core.
Sep 9 00:25:06.117849 systemd[1]: Started session-3.scope.
Sep 9 00:25:06.174521 sshd[1297]: pam_unix(sshd:session): session closed for user core
Sep 9 00:25:06.180248 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:33102.service: Deactivated successfully.
Sep 9 00:25:06.181625 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:25:06.184466 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:33116.service.
Sep 9 00:25:06.185964 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:25:06.187326 systemd-logind[1203]: Removed session 3.
Sep 9 00:25:06.220153 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 33116 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:25:06.221762 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:06.226519 systemd-logind[1203]: New session 4 of user core.
Sep 9 00:25:06.228175 systemd[1]: Started session-4.scope.
Sep 9 00:25:06.290804 sshd[1304]: pam_unix(sshd:session): session closed for user core
Sep 9 00:25:06.295829 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:33116.service: Deactivated successfully.
Sep 9 00:25:06.296502 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:25:06.298847 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:25:06.301392 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:33124.service.
Sep 9 00:25:06.302387 systemd-logind[1203]: Removed session 4.
Sep 9 00:25:06.334829 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 33124 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:25:06.336184 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:25:06.341593 systemd-logind[1203]: New session 5 of user core.
Sep 9 00:25:06.342119 systemd[1]: Started session-5.scope.
Sep 9 00:25:06.405661 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:25:06.405869 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 9 00:25:06.446673 systemd[1]: Starting docker.service...
Sep 9 00:25:06.505238 env[1326]: time="2025-09-09T00:25:06.505164760Z" level=info msg="Starting up"
Sep 9 00:25:06.507328 env[1326]: time="2025-09-09T00:25:06.507284158Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 9 00:25:06.507328 env[1326]: time="2025-09-09T00:25:06.507313143Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 9 00:25:06.507328 env[1326]: time="2025-09-09T00:25:06.507332129Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 9 00:25:06.507447 env[1326]: time="2025-09-09T00:25:06.507343302Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 9 00:25:06.509778 env[1326]: time="2025-09-09T00:25:06.509735953Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 9 00:25:06.509778 env[1326]: time="2025-09-09T00:25:06.509764655Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 9 00:25:06.509778 env[1326]: time="2025-09-09T00:25:06.509779917Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 9 00:25:06.509923 env[1326]: time="2025-09-09T00:25:06.509788742Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 9 00:25:06.790315 env[1326]: time="2025-09-09T00:25:06.790226009Z" level=info msg="Loading containers: start."
Sep 9 00:25:06.927376 kernel: Initializing XFRM netlink socket
Sep 9 00:25:06.958855 env[1326]: time="2025-09-09T00:25:06.958806631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 9 00:25:07.011233 systemd-networkd[1051]: docker0: Link UP
Sep 9 00:25:07.033742 env[1326]: time="2025-09-09T00:25:07.033177114Z" level=info msg="Loading containers: done."
Sep 9 00:25:07.048112 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3858438608-merged.mount: Deactivated successfully.
Sep 9 00:25:07.055396 env[1326]: time="2025-09-09T00:25:07.055353844Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:25:07.055758 env[1326]: time="2025-09-09T00:25:07.055735709Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 9 00:25:07.055940 env[1326]: time="2025-09-09T00:25:07.055923428Z" level=info msg="Daemon has completed initialization"
Sep 9 00:25:07.071993 systemd[1]: Started docker.service.
Sep 9 00:25:07.079899 env[1326]: time="2025-09-09T00:25:07.079846218Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:25:07.755829 env[1214]: time="2025-09-09T00:25:07.755548346Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 00:25:08.481546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376264577.mount: Deactivated successfully.
Sep 9 00:25:10.036047 env[1214]: time="2025-09-09T00:25:10.036002544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:10.037241 env[1214]: time="2025-09-09T00:25:10.037215009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:10.040086 env[1214]: time="2025-09-09T00:25:10.040056541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:10.042818 env[1214]: time="2025-09-09T00:25:10.042791204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:10.043623 env[1214]: time="2025-09-09T00:25:10.043597017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 9 00:25:10.046230 env[1214]: time="2025-09-09T00:25:10.046199955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 00:25:11.648972 env[1214]: time="2025-09-09T00:25:11.648906258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:11.651391 env[1214]: time="2025-09-09T00:25:11.651350675Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:11.653037 env[1214]: time="2025-09-09T00:25:11.653009706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:11.655687 env[1214]: time="2025-09-09T00:25:11.655648155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:11.656007 env[1214]: time="2025-09-09T00:25:11.655980396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 9 00:25:11.657339 env[1214]: time="2025-09-09T00:25:11.657314834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:25:13.255640 env[1214]: time="2025-09-09T00:25:13.255594993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:13.256942 env[1214]: time="2025-09-09T00:25:13.256913121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:13.260586 env[1214]: time="2025-09-09T00:25:13.260554835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:13.262460 env[1214]: time="2025-09-09T00:25:13.262418332Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:13.263365 env[1214]: time="2025-09-09T00:25:13.263336859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 9 00:25:13.264531 env[1214]: time="2025-09-09T00:25:13.264501504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:25:14.403085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063637577.mount: Deactivated successfully. Sep 9 00:25:14.883466 env[1214]: time="2025-09-09T00:25:14.883340871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:14.884747 env[1214]: time="2025-09-09T00:25:14.884669310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:14.885953 env[1214]: time="2025-09-09T00:25:14.885868575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:14.887038 env[1214]: time="2025-09-09T00:25:14.887006227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:14.887346 env[1214]: time="2025-09-09T00:25:14.887319237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference 
\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 00:25:14.888029 env[1214]: time="2025-09-09T00:25:14.888002334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:25:15.024411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:25:15.024587 systemd[1]: Stopped kubelet.service. Sep 9 00:25:15.025945 systemd[1]: Starting kubelet.service... Sep 9 00:25:15.122539 systemd[1]: Started kubelet.service. Sep 9 00:25:15.237050 kubelet[1462]: E0909 00:25:15.236924 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:25:15.239600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:25:15.239725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:25:15.621930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255667432.mount: Deactivated successfully. 
Sep 9 00:25:16.869212 env[1214]: time="2025-09-09T00:25:16.869164741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:16.870703 env[1214]: time="2025-09-09T00:25:16.870674111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:16.873443 env[1214]: time="2025-09-09T00:25:16.873414973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:16.875807 env[1214]: time="2025-09-09T00:25:16.875781974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:16.876692 env[1214]: time="2025-09-09T00:25:16.876662035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 9 00:25:16.877363 env[1214]: time="2025-09-09T00:25:16.877329545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:25:17.383284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075428792.mount: Deactivated successfully. 
Sep 9 00:25:17.389266 env[1214]: time="2025-09-09T00:25:17.389215291Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:17.391768 env[1214]: time="2025-09-09T00:25:17.391727200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:17.393514 env[1214]: time="2025-09-09T00:25:17.393480370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:17.394914 env[1214]: time="2025-09-09T00:25:17.394872461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:17.395419 env[1214]: time="2025-09-09T00:25:17.395388689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:25:17.396293 env[1214]: time="2025-09-09T00:25:17.396200014Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:25:17.846477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109541010.mount: Deactivated successfully. 
Sep 9 00:25:20.108379 env[1214]: time="2025-09-09T00:25:20.108323175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:20.110669 env[1214]: time="2025-09-09T00:25:20.110607772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:20.114550 env[1214]: time="2025-09-09T00:25:20.114502357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:20.116045 env[1214]: time="2025-09-09T00:25:20.116011196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:20.116889 env[1214]: time="2025-09-09T00:25:20.116852436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 9 00:25:25.315731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:25:25.315937 systemd[1]: Stopped kubelet.service. Sep 9 00:25:25.317289 systemd[1]: Starting kubelet.service... Sep 9 00:25:25.328179 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:25:25.328245 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:25:25.328442 systemd[1]: Stopped kubelet.service. Sep 9 00:25:25.330499 systemd[1]: Starting kubelet.service... Sep 9 00:25:25.360064 systemd[1]: Reloading. 
Sep 9 00:25:25.421077 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-09-09T00:25:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:25.421111 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-09-09T00:25:25Z" level=info msg="torcx already run" Sep 9 00:25:25.565016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:25:25.565037 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:25.580599 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:25.655873 systemd[1]: Started kubelet.service. Sep 9 00:25:25.657549 systemd[1]: Stopping kubelet.service... Sep 9 00:25:25.657780 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:25:25.657995 systemd[1]: Stopped kubelet.service. Sep 9 00:25:25.659874 systemd[1]: Starting kubelet.service... Sep 9 00:25:25.767160 systemd[1]: Started kubelet.service. Sep 9 00:25:25.804210 kubelet[1567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:25:25.804210 kubelet[1567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:25:25.804210 kubelet[1567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:25:25.804616 kubelet[1567]: I0909 00:25:25.804250 1567 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:25:26.084097 kubelet[1567]: I0909 00:25:26.084046 1567 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:25:26.084097 kubelet[1567]: I0909 00:25:26.084080 1567 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:25:26.084376 kubelet[1567]: I0909 00:25:26.084344 1567 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:25:26.107349 kubelet[1567]: E0909 00:25:26.107000 1567 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:25:26.107975 kubelet[1567]: I0909 00:25:26.107074 1567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:25:26.116550 kubelet[1567]: E0909 00:25:26.116524 1567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:25:26.116670 kubelet[1567]: I0909 00:25:26.116656 1567 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 9 00:25:26.119434 kubelet[1567]: I0909 00:25:26.119413 1567 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:25:26.120778 kubelet[1567]: I0909 00:25:26.120730 1567 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:25:26.120920 kubelet[1567]: I0909 00:25:26.120774 1567 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 
00:25:26.121067 kubelet[1567]: I0909 00:25:26.121055 1567 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:25:26.121067 kubelet[1567]: I0909 00:25:26.121069 1567 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:25:26.121333 kubelet[1567]: I0909 00:25:26.121304 1567 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:25:26.124300 kubelet[1567]: I0909 00:25:26.124274 1567 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:25:26.124300 kubelet[1567]: I0909 00:25:26.124300 1567 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:25:26.124404 kubelet[1567]: I0909 00:25:26.124335 1567 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:25:26.124404 kubelet[1567]: I0909 00:25:26.124352 1567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:25:26.125777 kubelet[1567]: I0909 00:25:26.125745 1567 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:25:26.126734 kubelet[1567]: I0909 00:25:26.126649 1567 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:25:26.126846 kubelet[1567]: W0909 00:25:26.126814 1567 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 00:25:26.129434 kubelet[1567]: I0909 00:25:26.129414 1567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:25:26.129490 kubelet[1567]: I0909 00:25:26.129455 1567 server.go:1289] "Started kubelet" Sep 9 00:25:26.132448 kubelet[1567]: E0909 00:25:26.132410 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:25:26.132580 kubelet[1567]: I0909 00:25:26.132556 1567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:25:26.149144 kubelet[1567]: E0909 00:25:26.149116 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:25:26.150116 kubelet[1567]: I0909 00:25:26.150049 1567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:25:26.150496 kubelet[1567]: I0909 00:25:26.150475 1567 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:25:26.152074 kubelet[1567]: I0909 00:25:26.152055 1567 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:25:26.154637 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 9 00:25:26.154898 kubelet[1567]: I0909 00:25:26.154881 1567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:25:26.157022 kubelet[1567]: E0909 00:25:26.153882 1567 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863758c8d331197 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:25:26.129430935 +0000 UTC m=+0.355350040,LastTimestamp:2025-09-09 00:25:26.129430935 +0000 UTC m=+0.355350040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:25:26.157022 kubelet[1567]: E0909 00:25:26.155366 1567 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:25:26.157022 kubelet[1567]: I0909 00:25:26.155408 1567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:25:26.158687 kubelet[1567]: E0909 00:25:26.158631 1567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:25:26.158810 kubelet[1567]: I0909 00:25:26.158792 1567 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:25:26.158908 kubelet[1567]: I0909 00:25:26.158889 1567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:25:26.159074 kubelet[1567]: I0909 00:25:26.159058 1567 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:25:26.159561 kubelet[1567]: E0909 00:25:26.159526 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:25:26.160636 kubelet[1567]: I0909 00:25:26.160606 1567 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:25:26.160636 kubelet[1567]: I0909 00:25:26.160629 1567 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:25:26.160760 kubelet[1567]: I0909 00:25:26.160732 1567 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:25:26.160943 kubelet[1567]: E0909 00:25:26.160906 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 9 00:25:26.163797 kubelet[1567]: I0909 00:25:26.163768 1567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:25:26.174080 kubelet[1567]: I0909 00:25:26.174058 1567 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:25:26.174080 kubelet[1567]: I0909 00:25:26.174076 1567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:25:26.174189 kubelet[1567]: I0909 00:25:26.174107 1567 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:25:26.178742 kubelet[1567]: I0909 00:25:26.178718 1567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:25:26.178864 kubelet[1567]: I0909 00:25:26.178853 1567 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:25:26.178949 kubelet[1567]: I0909 00:25:26.178930 1567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:25:26.179029 kubelet[1567]: I0909 00:25:26.179020 1567 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:25:26.179130 kubelet[1567]: E0909 00:25:26.179111 1567 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:25:26.247891 kubelet[1567]: I0909 00:25:26.247688 1567 policy_none.go:49] "None policy: Start" Sep 9 00:25:26.247891 kubelet[1567]: I0909 00:25:26.247725 1567 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:25:26.247891 kubelet[1567]: I0909 00:25:26.247738 1567 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:25:26.248332 kubelet[1567]: E0909 00:25:26.248305 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:25:26.252097 systemd[1]: Created slice kubepods.slice. Sep 9 00:25:26.255852 systemd[1]: Created slice kubepods-burstable.slice. Sep 9 00:25:26.258632 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 9 00:25:26.259049 kubelet[1567]: E0909 00:25:26.259023 1567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:25:26.269697 kubelet[1567]: E0909 00:25:26.269662 1567 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:25:26.269838 kubelet[1567]: I0909 00:25:26.269817 1567 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:25:26.269887 kubelet[1567]: I0909 00:25:26.269836 1567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:25:26.270138 kubelet[1567]: I0909 00:25:26.270111 1567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:25:26.271366 kubelet[1567]: E0909 00:25:26.271192 1567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:25:26.271366 kubelet[1567]: E0909 00:25:26.271226 1567 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:25:26.291253 systemd[1]: Created slice kubepods-burstable-pod324efaf519b562a26e4a5d00b574f1da.slice. Sep 9 00:25:26.299855 kubelet[1567]: E0909 00:25:26.299813 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:26.304522 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:25:26.306118 kubelet[1567]: E0909 00:25:26.306093 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:26.307350 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 00:25:26.308683 kubelet[1567]: E0909 00:25:26.308660 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:26.361528 kubelet[1567]: E0909 00:25:26.361416 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Sep 9 00:25:26.371633 kubelet[1567]: I0909 00:25:26.371602 1567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:25:26.372071 kubelet[1567]: E0909 00:25:26.372043 1567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 00:25:26.460682 kubelet[1567]: I0909 00:25:26.460628 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:26.460839 kubelet[1567]: I0909 00:25:26.460712 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:26.460839 kubelet[1567]: I0909 00:25:26.460744 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:26.460839 kubelet[1567]: I0909 00:25:26.460768 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:26.460839 kubelet[1567]: I0909 00:25:26.460786 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:26.460839 kubelet[1567]: I0909 00:25:26.460820 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:26.460946 kubelet[1567]: I0909 00:25:26.460846 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:26.460946 kubelet[1567]: I0909 00:25:26.460868 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:26.460946 kubelet[1567]: I0909 00:25:26.460882 1567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:26.573248 kubelet[1567]: I0909 00:25:26.573218 1567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:25:26.573709 kubelet[1567]: E0909 00:25:26.573676 1567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 00:25:26.601023 kubelet[1567]: E0909 00:25:26.600985 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:26.601820 env[1214]: time="2025-09-09T00:25:26.601748386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:324efaf519b562a26e4a5d00b574f1da,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:26.607158 kubelet[1567]: E0909 00:25:26.607133 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:26.607555 env[1214]: time="2025-09-09T00:25:26.607523999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:26.609818 kubelet[1567]: E0909 00:25:26.609799 1567 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:26.610274 env[1214]: time="2025-09-09T00:25:26.610242024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:26.762912 kubelet[1567]: E0909 00:25:26.762799 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Sep 9 00:25:26.974970 kubelet[1567]: I0909 00:25:26.974913 1567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:25:26.975333 kubelet[1567]: E0909 00:25:26.975302 1567 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 00:25:27.094486 kubelet[1567]: E0909 00:25:27.094335 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:25:27.125373 kubelet[1567]: E0909 00:25:27.125235 1567 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:25:27.155815 kubelet[1567]: E0909 00:25:27.155524 1567 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:25:27.173166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108530355.mount: Deactivated successfully. Sep 9 00:25:27.182796 env[1214]: time="2025-09-09T00:25:27.182746202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.189070 env[1214]: time="2025-09-09T00:25:27.188282479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.192434 env[1214]: time="2025-09-09T00:25:27.191632843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.195776 env[1214]: time="2025-09-09T00:25:27.193583745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.197876 env[1214]: time="2025-09-09T00:25:27.196652383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.200783 env[1214]: time="2025-09-09T00:25:27.199112978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.203678 env[1214]: time="2025-09-09T00:25:27.203026472Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.203944 env[1214]: time="2025-09-09T00:25:27.203923126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.206304 env[1214]: time="2025-09-09T00:25:27.204745725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.207034 env[1214]: time="2025-09-09T00:25:27.206935322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.210212 env[1214]: time="2025-09-09T00:25:27.210175445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.212783 env[1214]: time="2025-09-09T00:25:27.212754566Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:27.231328 env[1214]: time="2025-09-09T00:25:27.231260382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:27.231328 env[1214]: time="2025-09-09T00:25:27.231302973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:27.231485 env[1214]: time="2025-09-09T00:25:27.231312940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:27.231635 env[1214]: time="2025-09-09T00:25:27.231602271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46c72843fd991d6f3cfcf68e4087e5c899aff1d9f022f43502db675a18317956 pid=1612 runtime=io.containerd.runc.v2 Sep 9 00:25:27.249420 systemd[1]: Started cri-containerd-46c72843fd991d6f3cfcf68e4087e5c899aff1d9f022f43502db675a18317956.scope. Sep 9 00:25:27.257994 env[1214]: time="2025-09-09T00:25:27.257875391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:27.257994 env[1214]: time="2025-09-09T00:25:27.257940639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:27.258219 env[1214]: time="2025-09-09T00:25:27.258173168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:27.258689 env[1214]: time="2025-09-09T00:25:27.258633024Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f5d249ee97b5de3ee29be15afe64afa1c210e4c75fe870de9603b47fc0cc3dd pid=1643 runtime=io.containerd.runc.v2 Sep 9 00:25:27.265602 env[1214]: time="2025-09-09T00:25:27.265536378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:27.265702 env[1214]: time="2025-09-09T00:25:27.265578889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:27.265702 env[1214]: time="2025-09-09T00:25:27.265589137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:27.265804 env[1214]: time="2025-09-09T00:25:27.265716589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/809abff37c37e9c48b5c1859a60fbd2d996800a2a775170f0d990eee1f3d9ced pid=1667 runtime=io.containerd.runc.v2 Sep 9 00:25:27.272380 systemd[1]: Started cri-containerd-5f5d249ee97b5de3ee29be15afe64afa1c210e4c75fe870de9603b47fc0cc3dd.scope. Sep 9 00:25:27.281467 systemd[1]: Started cri-containerd-809abff37c37e9c48b5c1859a60fbd2d996800a2a775170f0d990eee1f3d9ced.scope. Sep 9 00:25:27.299912 env[1214]: time="2025-09-09T00:25:27.298583198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c72843fd991d6f3cfcf68e4087e5c899aff1d9f022f43502db675a18317956\"" Sep 9 00:25:27.300024 kubelet[1567]: E0909 00:25:27.299499 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:27.304119 env[1214]: time="2025-09-09T00:25:27.304083409Z" level=info msg="CreateContainer within sandbox \"46c72843fd991d6f3cfcf68e4087e5c899aff1d9f022f43502db675a18317956\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:25:27.320322 env[1214]: time="2025-09-09T00:25:27.320264569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:324efaf519b562a26e4a5d00b574f1da,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f5d249ee97b5de3ee29be15afe64afa1c210e4c75fe870de9603b47fc0cc3dd\"" Sep 9 00:25:27.321057 kubelet[1567]: E0909 00:25:27.321031 
1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:27.324468 env[1214]: time="2025-09-09T00:25:27.324430087Z" level=info msg="CreateContainer within sandbox \"5f5d249ee97b5de3ee29be15afe64afa1c210e4c75fe870de9603b47fc0cc3dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:25:27.329359 env[1214]: time="2025-09-09T00:25:27.329327899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"809abff37c37e9c48b5c1859a60fbd2d996800a2a775170f0d990eee1f3d9ced\"" Sep 9 00:25:27.330070 kubelet[1567]: E0909 00:25:27.330049 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:27.334131 env[1214]: time="2025-09-09T00:25:27.334077963Z" level=info msg="CreateContainer within sandbox \"46c72843fd991d6f3cfcf68e4087e5c899aff1d9f022f43502db675a18317956\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bfd8a86f11e7c91cd8f418dd114105d02f2ede352c9106ee54635ab0e5005b64\"" Sep 9 00:25:27.334339 env[1214]: time="2025-09-09T00:25:27.334207898Z" level=info msg="CreateContainer within sandbox \"809abff37c37e9c48b5c1859a60fbd2d996800a2a775170f0d990eee1f3d9ced\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:25:27.334983 env[1214]: time="2025-09-09T00:25:27.334942153Z" level=info msg="StartContainer for \"bfd8a86f11e7c91cd8f418dd114105d02f2ede352c9106ee54635ab0e5005b64\"" Sep 9 00:25:27.340301 env[1214]: time="2025-09-09T00:25:27.340251065Z" level=info msg="CreateContainer within sandbox \"5f5d249ee97b5de3ee29be15afe64afa1c210e4c75fe870de9603b47fc0cc3dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"0876b52975a8c95a80915bcf2fdcae675df61dded0e7427a22dddb5349cec95f\"" Sep 9 00:25:27.340676 env[1214]: time="2025-09-09T00:25:27.340651197Z" level=info msg="StartContainer for \"0876b52975a8c95a80915bcf2fdcae675df61dded0e7427a22dddb5349cec95f\"" Sep 9 00:25:27.349206 env[1214]: time="2025-09-09T00:25:27.348356616Z" level=info msg="CreateContainer within sandbox \"809abff37c37e9c48b5c1859a60fbd2d996800a2a775170f0d990eee1f3d9ced\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68e5ac491ffa8ff7694f9f8dbfe827dcee12394a651857ec1eb68dcbda9e6b99\"" Sep 9 00:25:27.350309 env[1214]: time="2025-09-09T00:25:27.350269891Z" level=info msg="StartContainer for \"68e5ac491ffa8ff7694f9f8dbfe827dcee12394a651857ec1eb68dcbda9e6b99\"" Sep 9 00:25:27.353266 systemd[1]: Started cri-containerd-bfd8a86f11e7c91cd8f418dd114105d02f2ede352c9106ee54635ab0e5005b64.scope. Sep 9 00:25:27.372641 systemd[1]: Started cri-containerd-0876b52975a8c95a80915bcf2fdcae675df61dded0e7427a22dddb5349cec95f.scope. Sep 9 00:25:27.375600 systemd[1]: Started cri-containerd-68e5ac491ffa8ff7694f9f8dbfe827dcee12394a651857ec1eb68dcbda9e6b99.scope. 
Sep 9 00:25:27.401939 env[1214]: time="2025-09-09T00:25:27.401894780Z" level=info msg="StartContainer for \"bfd8a86f11e7c91cd8f418dd114105d02f2ede352c9106ee54635ab0e5005b64\" returns successfully" Sep 9 00:25:27.411572 env[1214]: time="2025-09-09T00:25:27.410789106Z" level=info msg="StartContainer for \"0876b52975a8c95a80915bcf2fdcae675df61dded0e7427a22dddb5349cec95f\" returns successfully" Sep 9 00:25:27.444370 env[1214]: time="2025-09-09T00:25:27.444313154Z" level=info msg="StartContainer for \"68e5ac491ffa8ff7694f9f8dbfe827dcee12394a651857ec1eb68dcbda9e6b99\" returns successfully" Sep 9 00:25:27.778869 kubelet[1567]: I0909 00:25:27.778082 1567 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:25:28.185407 kubelet[1567]: E0909 00:25:28.185321 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:28.185811 kubelet[1567]: E0909 00:25:28.185794 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:28.187676 kubelet[1567]: E0909 00:25:28.187655 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:28.187858 kubelet[1567]: E0909 00:25:28.187843 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:28.189301 kubelet[1567]: E0909 00:25:28.189280 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:28.189481 kubelet[1567]: E0909 00:25:28.189454 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:29.191661 kubelet[1567]: E0909 00:25:29.191626 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:29.192194 kubelet[1567]: E0909 00:25:29.191728 1567 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:25:29.192320 kubelet[1567]: E0909 00:25:29.192304 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:29.192423 kubelet[1567]: E0909 00:25:29.192320 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:29.489832 kubelet[1567]: E0909 00:25:29.489549 1567 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:25:29.586912 kubelet[1567]: I0909 00:25:29.586876 1567 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:25:29.659540 kubelet[1567]: I0909 00:25:29.659507 1567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:29.665544 kubelet[1567]: E0909 00:25:29.665515 1567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:29.665672 kubelet[1567]: I0909 00:25:29.665660 1567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:29.668912 kubelet[1567]: E0909 00:25:29.668888 1567 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:29.669034 kubelet[1567]: I0909 00:25:29.669022 1567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:29.670890 kubelet[1567]: E0909 00:25:29.670869 1567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:30.134324 kubelet[1567]: I0909 00:25:30.134284 1567 apiserver.go:52] "Watching apiserver" Sep 9 00:25:30.159858 kubelet[1567]: I0909 00:25:30.159825 1567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:25:30.191139 kubelet[1567]: I0909 00:25:30.191110 1567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:30.193539 kubelet[1567]: E0909 00:25:30.193325 1567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:30.193539 kubelet[1567]: E0909 00:25:30.193510 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:31.736061 systemd[1]: Reloading. 
Sep 9 00:25:31.809442 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-09-09T00:25:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:25:31.809471 /usr/lib/systemd/system-generators/torcx-generator[1875]: time="2025-09-09T00:25:31Z" level=info msg="torcx already run" Sep 9 00:25:31.880616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:25:31.880638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:25:31.896463 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:25:31.989835 systemd[1]: Stopping kubelet.service... Sep 9 00:25:32.000483 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:25:32.000772 systemd[1]: Stopped kubelet.service. Sep 9 00:25:32.002643 systemd[1]: Starting kubelet.service... Sep 9 00:25:32.108630 systemd[1]: Started kubelet.service. Sep 9 00:25:32.161661 kubelet[1917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:25:32.161661 kubelet[1917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:25:32.161661 kubelet[1917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:25:32.162092 kubelet[1917]: I0909 00:25:32.161810 1917 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:25:32.172401 kubelet[1917]: I0909 00:25:32.172352 1917 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:25:32.172401 kubelet[1917]: I0909 00:25:32.172386 1917 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:25:32.172616 kubelet[1917]: I0909 00:25:32.172599 1917 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:25:32.173831 kubelet[1917]: I0909 00:25:32.173800 1917 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:25:32.177468 kubelet[1917]: I0909 00:25:32.177423 1917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:25:32.183770 kubelet[1917]: E0909 00:25:32.183742 1917 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:25:32.183770 kubelet[1917]: I0909 00:25:32.183769 1917 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:25:32.186018 kubelet[1917]: I0909 00:25:32.185993 1917 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:25:32.186204 kubelet[1917]: I0909 00:25:32.186170 1917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:25:32.186328 kubelet[1917]: I0909 00:25:32.186196 1917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:25:32.186401 kubelet[1917]: I0909 00:25:32.186338 1917 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:25:32.186401 
kubelet[1917]: I0909 00:25:32.186346 1917 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:25:32.186401 kubelet[1917]: I0909 00:25:32.186384 1917 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:25:32.186504 kubelet[1917]: I0909 00:25:32.186492 1917 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:25:32.186532 kubelet[1917]: I0909 00:25:32.186507 1917 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:25:32.186532 kubelet[1917]: I0909 00:25:32.186530 1917 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:25:32.186576 kubelet[1917]: I0909 00:25:32.186542 1917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:25:32.189564 kubelet[1917]: I0909 00:25:32.189537 1917 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:25:32.190290 kubelet[1917]: I0909 00:25:32.190263 1917 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:25:32.192325 kubelet[1917]: I0909 00:25:32.192304 1917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:25:32.192461 kubelet[1917]: I0909 00:25:32.192448 1917 server.go:1289] "Started kubelet" Sep 9 00:25:32.192743 kubelet[1917]: I0909 00:25:32.192713 1917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:25:32.192990 kubelet[1917]: I0909 00:25:32.192909 1917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:25:32.193285 kubelet[1917]: I0909 00:25:32.193265 1917 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:25:32.194114 kubelet[1917]: I0909 00:25:32.194094 1917 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:25:32.199952 kubelet[1917]: I0909 
00:25:32.199910 1917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:25:32.209719 kubelet[1917]: E0909 00:25:32.209674 1917 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:25:32.210516 kubelet[1917]: I0909 00:25:32.210480 1917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:25:32.212431 kubelet[1917]: I0909 00:25:32.212408 1917 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:25:32.212684 kubelet[1917]: I0909 00:25:32.212668 1917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:25:32.212905 kubelet[1917]: I0909 00:25:32.212879 1917 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:25:32.214041 kubelet[1917]: I0909 00:25:32.214017 1917 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:25:32.214175 kubelet[1917]: I0909 00:25:32.214150 1917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:25:32.215303 kubelet[1917]: I0909 00:25:32.215282 1917 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:25:32.225786 kubelet[1917]: I0909 00:25:32.225738 1917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:25:32.227130 kubelet[1917]: I0909 00:25:32.226947 1917 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:25:32.227130 kubelet[1917]: I0909 00:25:32.227009 1917 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:25:32.227130 kubelet[1917]: I0909 00:25:32.227030 1917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:25:32.227130 kubelet[1917]: I0909 00:25:32.227037 1917 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:25:32.227130 kubelet[1917]: E0909 00:25:32.227086 1917 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:25:32.250478 kubelet[1917]: I0909 00:25:32.250379 1917 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:25:32.250478 kubelet[1917]: I0909 00:25:32.250395 1917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:25:32.250478 kubelet[1917]: I0909 00:25:32.250416 1917 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251771 1917 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251796 1917 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251817 1917 policy_none.go:49] "None policy: Start" Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251828 1917 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251839 1917 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:25:32.251930 kubelet[1917]: I0909 00:25:32.251927 1917 state_mem.go:75] "Updated machine memory state" Sep 9 00:25:32.255429 kubelet[1917]: E0909 00:25:32.255393 1917 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:25:32.255563 kubelet[1917]: I0909 00:25:32.255549 
1917 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:25:32.255601 kubelet[1917]: I0909 00:25:32.255565 1917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:25:32.256320 kubelet[1917]: I0909 00:25:32.256303 1917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:25:32.257751 kubelet[1917]: E0909 00:25:32.257726 1917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:25:32.328946 kubelet[1917]: I0909 00:25:32.328911 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:32.329271 kubelet[1917]: I0909 00:25:32.329253 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.329415 kubelet[1917]: I0909 00:25:32.329392 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:32.361702 kubelet[1917]: I0909 00:25:32.361679 1917 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:25:32.368155 kubelet[1917]: I0909 00:25:32.368105 1917 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:25:32.368333 kubelet[1917]: I0909 00:25:32.368320 1917 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:25:32.514309 kubelet[1917]: I0909 00:25:32.513727 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:32.514309 kubelet[1917]: I0909 00:25:32.513784 1917 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.514309 kubelet[1917]: I0909 00:25:32.513806 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.514309 kubelet[1917]: I0909 00:25:32.513828 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:25:32.514309 kubelet[1917]: I0909 00:25:32.513845 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:32.514895 kubelet[1917]: I0909 00:25:32.513859 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/324efaf519b562a26e4a5d00b574f1da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"324efaf519b562a26e4a5d00b574f1da\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:32.514895 kubelet[1917]: I0909 00:25:32.513872 1917 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.514895 kubelet[1917]: I0909 00:25:32.513888 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.514895 kubelet[1917]: I0909 00:25:32.513925 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:25:32.636427 kubelet[1917]: E0909 00:25:32.636390 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:32.636635 kubelet[1917]: E0909 00:25:32.636430 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:32.638122 kubelet[1917]: E0909 00:25:32.637846 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:32.727626 sudo[1957]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 
00:25:32.729364 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 9 00:25:33.189122 sudo[1957]: pam_unix(sudo:session): session closed for user root Sep 9 00:25:33.191804 kubelet[1917]: I0909 00:25:33.189514 1917 apiserver.go:52] "Watching apiserver" Sep 9 00:25:33.213453 kubelet[1917]: I0909 00:25:33.213419 1917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:25:33.234663 kubelet[1917]: I0909 00:25:33.234524 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:33.235758 kubelet[1917]: E0909 00:25:33.234931 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:33.235758 kubelet[1917]: E0909 00:25:33.234997 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:33.246176 kubelet[1917]: E0909 00:25:33.246140 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:25:33.246355 kubelet[1917]: E0909 00:25:33.246336 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:33.259072 kubelet[1917]: I0909 00:25:33.259020 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.259004282 podStartE2EDuration="1.259004282s" podCreationTimestamp="2025-09-09 00:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:33.258240032 +0000 UTC 
m=+1.143578716" watchObservedRunningTime="2025-09-09 00:25:33.259004282 +0000 UTC m=+1.144342966" Sep 9 00:25:33.276833 kubelet[1917]: I0909 00:25:33.276783 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.276768259 podStartE2EDuration="1.276768259s" podCreationTimestamp="2025-09-09 00:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:33.269518405 +0000 UTC m=+1.154857089" watchObservedRunningTime="2025-09-09 00:25:33.276768259 +0000 UTC m=+1.162106943" Sep 9 00:25:34.239501 kubelet[1917]: E0909 00:25:34.239458 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:34.240130 kubelet[1917]: E0909 00:25:34.240088 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:35.244377 kubelet[1917]: E0909 00:25:35.244343 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:36.026230 sudo[1314]: pam_unix(sudo:session): session closed for user root Sep 9 00:25:36.027641 sshd[1310]: pam_unix(sshd:session): session closed for user core Sep 9 00:25:36.030201 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:33124.service: Deactivated successfully. Sep 9 00:25:36.030907 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:25:36.031088 systemd[1]: session-5.scope: Consumed 8.213s CPU time. Sep 9 00:25:36.031521 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:25:36.032249 systemd-logind[1203]: Removed session 5. 
Sep 9 00:25:37.790241 kubelet[1917]: E0909 00:25:37.790191 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:37.816698 kubelet[1917]: I0909 00:25:37.816641 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.816626088 podStartE2EDuration="5.816626088s" podCreationTimestamp="2025-09-09 00:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:33.277160947 +0000 UTC m=+1.162499591" watchObservedRunningTime="2025-09-09 00:25:37.816626088 +0000 UTC m=+5.701964812" Sep 9 00:25:37.909818 kubelet[1917]: I0909 00:25:37.909784 1917 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:25:37.910085 env[1214]: time="2025-09-09T00:25:37.910042670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:25:37.910346 kubelet[1917]: I0909 00:25:37.910230 1917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:25:38.248976 kubelet[1917]: E0909 00:25:38.248906 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.288355 kubelet[1917]: E0909 00:25:38.288320 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.514725 systemd[1]: Created slice kubepods-besteffort-pod318567a7_06c6_4e96_904a_8d9e4a497a6b.slice. 
Sep 9 00:25:38.524690 systemd[1]: Created slice kubepods-burstable-pod02d354d5_fc1e_46d1_8030_6be66b8a4427.slice. Sep 9 00:25:38.650526 kubelet[1917]: I0909 00:25:38.650487 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/318567a7-06c6-4e96-904a-8d9e4a497a6b-kube-proxy\") pod \"kube-proxy-tblxb\" (UID: \"318567a7-06c6-4e96-904a-8d9e4a497a6b\") " pod="kube-system/kube-proxy-tblxb" Sep 9 00:25:38.650735 kubelet[1917]: I0909 00:25:38.650717 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318567a7-06c6-4e96-904a-8d9e4a497a6b-xtables-lock\") pod \"kube-proxy-tblxb\" (UID: \"318567a7-06c6-4e96-904a-8d9e4a497a6b\") " pod="kube-system/kube-proxy-tblxb" Sep 9 00:25:38.650818 kubelet[1917]: I0909 00:25:38.650806 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318567a7-06c6-4e96-904a-8d9e4a497a6b-lib-modules\") pod \"kube-proxy-tblxb\" (UID: \"318567a7-06c6-4e96-904a-8d9e4a497a6b\") " pod="kube-system/kube-proxy-tblxb" Sep 9 00:25:38.650891 kubelet[1917]: I0909 00:25:38.650878 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-etc-cni-netd\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651001 kubelet[1917]: I0909 00:25:38.650984 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5vr\" (UniqueName: \"kubernetes.io/projected/318567a7-06c6-4e96-904a-8d9e4a497a6b-kube-api-access-lz5vr\") pod \"kube-proxy-tblxb\" (UID: \"318567a7-06c6-4e96-904a-8d9e4a497a6b\") " 
pod="kube-system/kube-proxy-tblxb" Sep 9 00:25:38.651082 kubelet[1917]: I0909 00:25:38.651070 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cni-path\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651151 kubelet[1917]: I0909 00:25:38.651139 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-lib-modules\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651224 kubelet[1917]: I0909 00:25:38.651212 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-config-path\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651294 kubelet[1917]: I0909 00:25:38.651282 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-bpf-maps\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651373 kubelet[1917]: I0909 00:25:38.651358 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-cgroup\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651446 kubelet[1917]: I0909 00:25:38.651434 1917 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-xtables-lock\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651519 kubelet[1917]: I0909 00:25:38.651507 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4xg\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-kube-api-access-bj4xg\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651598 kubelet[1917]: I0909 00:25:38.651585 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-hostproc\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651674 kubelet[1917]: I0909 00:25:38.651662 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02d354d5-fc1e-46d1-8030-6be66b8a4427-clustermesh-secrets\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651766 kubelet[1917]: I0909 00:25:38.651752 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-net\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651838 kubelet[1917]: I0909 00:25:38.651826 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-kernel\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.651904 kubelet[1917]: I0909 00:25:38.651893 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-hubble-tls\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.652004 kubelet[1917]: I0909 00:25:38.651990 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-run\") pod \"cilium-pmwh9\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") " pod="kube-system/cilium-pmwh9" Sep 9 00:25:38.754460 kubelet[1917]: I0909 00:25:38.754426 1917 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:25:38.822719 kubelet[1917]: E0909 00:25:38.822578 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.823355 env[1214]: time="2025-09-09T00:25:38.823239485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tblxb,Uid:318567a7-06c6-4e96-904a-8d9e4a497a6b,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:38.828144 kubelet[1917]: E0909 00:25:38.828115 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.828816 env[1214]: time="2025-09-09T00:25:38.828764569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pmwh9,Uid:02d354d5-fc1e-46d1-8030-6be66b8a4427,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:38.843916 env[1214]: time="2025-09-09T00:25:38.843816810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:38.843916 env[1214]: time="2025-09-09T00:25:38.843858699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:38.843916 env[1214]: time="2025-09-09T00:25:38.843868982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:38.844366 env[1214]: time="2025-09-09T00:25:38.844321600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c350fb96a20dbca2a18950e0e55b8933fac895b43697d2e3edb0c3cdceaa858 pid=2018 runtime=io.containerd.runc.v2 Sep 9 00:25:38.847587 env[1214]: time="2025-09-09T00:25:38.847509135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:38.847587 env[1214]: time="2025-09-09T00:25:38.847547143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:38.847587 env[1214]: time="2025-09-09T00:25:38.847558706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:38.847740 env[1214]: time="2025-09-09T00:25:38.847705698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c pid=2035 runtime=io.containerd.runc.v2 Sep 9 00:25:38.857179 systemd[1]: Started cri-containerd-3c350fb96a20dbca2a18950e0e55b8933fac895b43697d2e3edb0c3cdceaa858.scope. Sep 9 00:25:38.861787 systemd[1]: Started cri-containerd-df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c.scope. 
Sep 9 00:25:38.906616 env[1214]: time="2025-09-09T00:25:38.906572730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pmwh9,Uid:02d354d5-fc1e-46d1-8030-6be66b8a4427,Namespace:kube-system,Attempt:0,} returns sandbox id \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\"" Sep 9 00:25:38.907151 env[1214]: time="2025-09-09T00:25:38.906884758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tblxb,Uid:318567a7-06c6-4e96-904a-8d9e4a497a6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c350fb96a20dbca2a18950e0e55b8933fac895b43697d2e3edb0c3cdceaa858\"" Sep 9 00:25:38.909527 kubelet[1917]: E0909 00:25:38.908840 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.909527 kubelet[1917]: E0909 00:25:38.909149 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:38.914685 env[1214]: time="2025-09-09T00:25:38.914635008Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:25:38.917550 env[1214]: time="2025-09-09T00:25:38.917126631Z" level=info msg="CreateContainer within sandbox \"3c350fb96a20dbca2a18950e0e55b8933fac895b43697d2e3edb0c3cdceaa858\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:25:38.942705 env[1214]: time="2025-09-09T00:25:38.942661477Z" level=info msg="CreateContainer within sandbox \"3c350fb96a20dbca2a18950e0e55b8933fac895b43697d2e3edb0c3cdceaa858\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2ba93fa4fed0e451fbbfc4b8722d6a7d64be6dba5bc995c2d7cf42a22c1c372\"" Sep 9 00:25:38.943903 env[1214]: time="2025-09-09T00:25:38.943870621Z" level=info msg="StartContainer for 
\"c2ba93fa4fed0e451fbbfc4b8722d6a7d64be6dba5bc995c2d7cf42a22c1c372\"" Sep 9 00:25:38.963579 systemd[1]: Started cri-containerd-c2ba93fa4fed0e451fbbfc4b8722d6a7d64be6dba5bc995c2d7cf42a22c1c372.scope. Sep 9 00:25:38.993998 env[1214]: time="2025-09-09T00:25:38.993929613Z" level=info msg="StartContainer for \"c2ba93fa4fed0e451fbbfc4b8722d6a7d64be6dba5bc995c2d7cf42a22c1c372\" returns successfully" Sep 9 00:25:39.136785 systemd[1]: Created slice kubepods-besteffort-pod9ced117e_cc94_4eb5_a11d_164a70205435.slice. Sep 9 00:25:39.155442 kubelet[1917]: I0909 00:25:39.155401 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99r92\" (UniqueName: \"kubernetes.io/projected/9ced117e-cc94-4eb5-a11d-164a70205435-kube-api-access-99r92\") pod \"cilium-operator-6c4d7847fc-7zdnh\" (UID: \"9ced117e-cc94-4eb5-a11d-164a70205435\") " pod="kube-system/cilium-operator-6c4d7847fc-7zdnh" Sep 9 00:25:39.155585 kubelet[1917]: I0909 00:25:39.155443 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ced117e-cc94-4eb5-a11d-164a70205435-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7zdnh\" (UID: \"9ced117e-cc94-4eb5-a11d-164a70205435\") " pod="kube-system/cilium-operator-6c4d7847fc-7zdnh" Sep 9 00:25:39.251709 kubelet[1917]: E0909 00:25:39.251676 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:39.252276 kubelet[1917]: E0909 00:25:39.252253 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:39.441985 kubelet[1917]: E0909 00:25:39.441548 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:39.442418 env[1214]: time="2025-09-09T00:25:39.442320338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7zdnh,Uid:9ced117e-cc94-4eb5-a11d-164a70205435,Namespace:kube-system,Attempt:0,}" Sep 9 00:25:39.461841 env[1214]: time="2025-09-09T00:25:39.461662646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:25:39.461841 env[1214]: time="2025-09-09T00:25:39.461702454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:25:39.461841 env[1214]: time="2025-09-09T00:25:39.461712536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:25:39.462052 env[1214]: time="2025-09-09T00:25:39.461877170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0 pid=2271 runtime=io.containerd.runc.v2 Sep 9 00:25:39.472344 systemd[1]: Started cri-containerd-2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0.scope. 
Sep 9 00:25:39.514934 env[1214]: time="2025-09-09T00:25:39.514835208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7zdnh,Uid:9ced117e-cc94-4eb5-a11d-164a70205435,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\"" Sep 9 00:25:39.515790 kubelet[1917]: E0909 00:25:39.515763 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:40.255972 kubelet[1917]: E0909 00:25:40.255930 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:43.751327 kubelet[1917]: E0909 00:25:43.751288 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:43.768140 kubelet[1917]: I0909 00:25:43.768064 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tblxb" podStartSLOduration=5.768049976 podStartE2EDuration="5.768049976s" podCreationTimestamp="2025-09-09 00:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:39.266520174 +0000 UTC m=+7.151858858" watchObservedRunningTime="2025-09-09 00:25:43.768049976 +0000 UTC m=+11.653388660" Sep 9 00:25:43.819154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822908181.mount: Deactivated successfully. 
Sep 9 00:25:46.133636 env[1214]: time="2025-09-09T00:25:46.133581408Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:46.136149 env[1214]: time="2025-09-09T00:25:46.136108806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:46.138020 env[1214]: time="2025-09-09T00:25:46.137982511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:25:46.138635 env[1214]: time="2025-09-09T00:25:46.138594838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 00:25:46.140735 env[1214]: time="2025-09-09T00:25:46.139941189Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:25:46.146102 env[1214]: time="2025-09-09T00:25:46.146032212Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:25:46.158305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672732048.mount: Deactivated successfully. 
Sep 9 00:25:46.165467 env[1214]: time="2025-09-09T00:25:46.165383113Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\"" Sep 9 00:25:46.166701 env[1214]: time="2025-09-09T00:25:46.165930110Z" level=info msg="StartContainer for \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\"" Sep 9 00:25:46.194013 systemd[1]: Started cri-containerd-2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f.scope. Sep 9 00:25:46.236273 systemd[1]: cri-containerd-2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f.scope: Deactivated successfully. Sep 9 00:25:46.278650 env[1214]: time="2025-09-09T00:25:46.278591511Z" level=info msg="StartContainer for \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\" returns successfully" Sep 9 00:25:46.369677 env[1214]: time="2025-09-09T00:25:46.369601444Z" level=info msg="shim disconnected" id=2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f Sep 9 00:25:46.369677 env[1214]: time="2025-09-09T00:25:46.369668414Z" level=warning msg="cleaning up after shim disconnected" id=2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f namespace=k8s.io Sep 9 00:25:46.369677 env[1214]: time="2025-09-09T00:25:46.369678295Z" level=info msg="cleaning up dead shim" Sep 9 00:25:46.377502 env[1214]: time="2025-09-09T00:25:46.377446716Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:25:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2355 runtime=io.containerd.runc.v2\n" Sep 9 00:25:47.155278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f-rootfs.mount: Deactivated successfully. 
Sep 9 00:25:47.286271 kubelet[1917]: E0909 00:25:47.286241 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:25:47.296613 env[1214]: time="2025-09-09T00:25:47.296502872Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:25:47.319087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632230989.mount: Deactivated successfully. Sep 9 00:25:47.324564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1634854236.mount: Deactivated successfully. Sep 9 00:25:47.329705 env[1214]: time="2025-09-09T00:25:47.329652454Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\"" Sep 9 00:25:47.330355 env[1214]: time="2025-09-09T00:25:47.330325505Z" level=info msg="StartContainer for \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\"" Sep 9 00:25:47.346344 systemd[1]: Started cri-containerd-fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8.scope. Sep 9 00:25:47.387476 env[1214]: time="2025-09-09T00:25:47.387428591Z" level=info msg="StartContainer for \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\" returns successfully" Sep 9 00:25:47.390574 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:25:47.390840 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:25:47.391846 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:25:47.393672 systemd[1]: Starting systemd-sysctl.service... 
Sep 9 00:25:47.394243 systemd[1]: cri-containerd-fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8.scope: Deactivated successfully.
Sep 9 00:25:47.407113 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:25:47.422458 env[1214]: time="2025-09-09T00:25:47.422414101Z" level=info msg="shim disconnected" id=fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8
Sep 9 00:25:47.422458 env[1214]: time="2025-09-09T00:25:47.422456667Z" level=warning msg="cleaning up after shim disconnected" id=fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8 namespace=k8s.io
Sep 9 00:25:47.422458 env[1214]: time="2025-09-09T00:25:47.422466868Z" level=info msg="cleaning up dead shim"
Sep 9 00:25:47.429235 env[1214]: time="2025-09-09T00:25:47.429175171Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:25:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2423 runtime=io.containerd.runc.v2\n"
Sep 9 00:25:47.771081 env[1214]: time="2025-09-09T00:25:47.771020187Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:47.772555 env[1214]: time="2025-09-09T00:25:47.772521869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:47.774023 env[1214]: time="2025-09-09T00:25:47.773996948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:25:47.774470 env[1214]: time="2025-09-09T00:25:47.774438407Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 00:25:47.780457 env[1214]: time="2025-09-09T00:25:47.780422693Z" level=info msg="CreateContainer within sandbox \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 00:25:47.790076 env[1214]: time="2025-09-09T00:25:47.790036107Z" level=info msg="CreateContainer within sandbox \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\""
Sep 9 00:25:47.791553 env[1214]: time="2025-09-09T00:25:47.790591381Z" level=info msg="StartContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\""
Sep 9 00:25:47.804290 systemd[1]: Started cri-containerd-535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15.scope.
Sep 9 00:25:47.837876 env[1214]: time="2025-09-09T00:25:47.837831980Z" level=info msg="StartContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" returns successfully"
Sep 9 00:25:48.002069 update_engine[1207]: I0909 00:25:48.002020 1207 update_attempter.cc:509] Updating boot flags...
Sep 9 00:25:48.289140 kubelet[1917]: E0909 00:25:48.289099 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:48.296767 env[1214]: time="2025-09-09T00:25:48.296700072Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:25:48.297085 kubelet[1917]: E0909 00:25:48.296725 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:48.328771 kubelet[1917]: I0909 00:25:48.328680 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7zdnh" podStartSLOduration=1.071251085 podStartE2EDuration="9.328662883s" podCreationTimestamp="2025-09-09 00:25:39 +0000 UTC" firstStartedPulling="2025-09-09 00:25:39.5179993 +0000 UTC m=+7.403337984" lastFinishedPulling="2025-09-09 00:25:47.775411098 +0000 UTC m=+15.660749782" observedRunningTime="2025-09-09 00:25:48.327449608 +0000 UTC m=+16.212788292" watchObservedRunningTime="2025-09-09 00:25:48.328662883 +0000 UTC m=+16.214001567"
Sep 9 00:25:48.329372 env[1214]: time="2025-09-09T00:25:48.329317487Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\""
Sep 9 00:25:48.330016 env[1214]: time="2025-09-09T00:25:48.329983652Z" level=info msg="StartContainer for \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\""
Sep 9 00:25:48.358369 systemd[1]: Started cri-containerd-7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243.scope.
Sep 9 00:25:48.409549 env[1214]: time="2025-09-09T00:25:48.409490708Z" level=info msg="StartContainer for \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\" returns successfully"
Sep 9 00:25:48.415090 systemd[1]: cri-containerd-7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243.scope: Deactivated successfully.
Sep 9 00:25:48.451135 env[1214]: time="2025-09-09T00:25:48.451085632Z" level=info msg="shim disconnected" id=7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243
Sep 9 00:25:48.451447 env[1214]: time="2025-09-09T00:25:48.451428356Z" level=warning msg="cleaning up after shim disconnected" id=7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243 namespace=k8s.io
Sep 9 00:25:48.451542 env[1214]: time="2025-09-09T00:25:48.451527009Z" level=info msg="cleaning up dead shim"
Sep 9 00:25:48.460115 env[1214]: time="2025-09-09T00:25:48.460077463Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:25:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n"
Sep 9 00:25:49.155826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243-rootfs.mount: Deactivated successfully.
Sep 9 00:25:49.298454 kubelet[1917]: E0909 00:25:49.297361 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:49.298454 kubelet[1917]: E0909 00:25:49.297514 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:49.304575 env[1214]: time="2025-09-09T00:25:49.304520623Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:25:49.342648 env[1214]: time="2025-09-09T00:25:49.342600661Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\""
Sep 9 00:25:49.344176 env[1214]: time="2025-09-09T00:25:49.343264901Z" level=info msg="StartContainer for \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\""
Sep 9 00:25:49.365622 systemd[1]: Started cri-containerd-3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6.scope.
Sep 9 00:25:49.401531 systemd[1]: cri-containerd-3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6.scope: Deactivated successfully.
Sep 9 00:25:49.402636 env[1214]: time="2025-09-09T00:25:49.402594727Z" level=info msg="StartContainer for \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\" returns successfully"
Sep 9 00:25:49.429676 env[1214]: time="2025-09-09T00:25:49.429556091Z" level=info msg="shim disconnected" id=3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6
Sep 9 00:25:49.429676 env[1214]: time="2025-09-09T00:25:49.429601416Z" level=warning msg="cleaning up after shim disconnected" id=3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6 namespace=k8s.io
Sep 9 00:25:49.429676 env[1214]: time="2025-09-09T00:25:49.429611578Z" level=info msg="cleaning up dead shim"
Sep 9 00:25:49.439234 env[1214]: time="2025-09-09T00:25:49.439191744Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:25:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2588 runtime=io.containerd.runc.v2\n"
Sep 9 00:25:50.155711 systemd[1]: run-containerd-runc-k8s.io-3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6-runc.fObESy.mount: Deactivated successfully.
Sep 9 00:25:50.155809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6-rootfs.mount: Deactivated successfully.
Sep 9 00:25:50.302360 kubelet[1917]: E0909 00:25:50.301843 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:50.319130 env[1214]: time="2025-09-09T00:25:50.319062333Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:25:50.343172 env[1214]: time="2025-09-09T00:25:50.343124563Z" level=info msg="CreateContainer within sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\""
Sep 9 00:25:50.344108 env[1214]: time="2025-09-09T00:25:50.344066473Z" level=info msg="StartContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\""
Sep 9 00:25:50.367285 systemd[1]: Started cri-containerd-961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3.scope.
Sep 9 00:25:50.398184 env[1214]: time="2025-09-09T00:25:50.398134223Z" level=info msg="StartContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" returns successfully"
Sep 9 00:25:50.542916 kubelet[1917]: I0909 00:25:50.542391 1917 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 00:25:50.573936 systemd[1]: Created slice kubepods-burstable-podd80feaee_e9d3_46db_93a7_70adfbc45f3e.slice.
Sep 9 00:25:50.575985 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 9 00:25:50.579173 systemd[1]: Created slice kubepods-burstable-pod2c799c90_1f16_4e31_97bc_4eb1c438f09a.slice.
Sep 9 00:25:50.656175 kubelet[1917]: I0909 00:25:50.656130 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d80feaee-e9d3-46db-93a7-70adfbc45f3e-config-volume\") pod \"coredns-674b8bbfcf-sv6ll\" (UID: \"d80feaee-e9d3-46db-93a7-70adfbc45f3e\") " pod="kube-system/coredns-674b8bbfcf-sv6ll"
Sep 9 00:25:50.656175 kubelet[1917]: I0909 00:25:50.656176 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tw8m\" (UniqueName: \"kubernetes.io/projected/2c799c90-1f16-4e31-97bc-4eb1c438f09a-kube-api-access-4tw8m\") pod \"coredns-674b8bbfcf-rwkb9\" (UID: \"2c799c90-1f16-4e31-97bc-4eb1c438f09a\") " pod="kube-system/coredns-674b8bbfcf-rwkb9"
Sep 9 00:25:50.656368 kubelet[1917]: I0909 00:25:50.656207 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c799c90-1f16-4e31-97bc-4eb1c438f09a-config-volume\") pod \"coredns-674b8bbfcf-rwkb9\" (UID: \"2c799c90-1f16-4e31-97bc-4eb1c438f09a\") " pod="kube-system/coredns-674b8bbfcf-rwkb9"
Sep 9 00:25:50.656368 kubelet[1917]: I0909 00:25:50.656228 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4256\" (UniqueName: \"kubernetes.io/projected/d80feaee-e9d3-46db-93a7-70adfbc45f3e-kube-api-access-p4256\") pod \"coredns-674b8bbfcf-sv6ll\" (UID: \"d80feaee-e9d3-46db-93a7-70adfbc45f3e\") " pod="kube-system/coredns-674b8bbfcf-sv6ll"
Sep 9 00:25:50.802995 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 9 00:25:50.876813 kubelet[1917]: E0909 00:25:50.876773 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:50.877698 env[1214]: time="2025-09-09T00:25:50.877659515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv6ll,Uid:d80feaee-e9d3-46db-93a7-70adfbc45f3e,Namespace:kube-system,Attempt:0,}"
Sep 9 00:25:50.881916 kubelet[1917]: E0909 00:25:50.881890 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:50.883661 env[1214]: time="2025-09-09T00:25:50.883582922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rwkb9,Uid:2c799c90-1f16-4e31-97bc-4eb1c438f09a,Namespace:kube-system,Attempt:0,}"
Sep 9 00:25:51.307758 kubelet[1917]: E0909 00:25:51.306992 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:51.321168 kubelet[1917]: I0909 00:25:51.321100 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pmwh9" podStartSLOduration=6.0956684899999996 podStartE2EDuration="13.321083514s" podCreationTimestamp="2025-09-09 00:25:38 +0000 UTC" firstStartedPulling="2025-09-09 00:25:38.914271689 +0000 UTC m=+6.799610373" lastFinishedPulling="2025-09-09 00:25:46.139686713 +0000 UTC m=+14.025025397" observedRunningTime="2025-09-09 00:25:51.320740156 +0000 UTC m=+19.206078840" watchObservedRunningTime="2025-09-09 00:25:51.321083514 +0000 UTC m=+19.206422198"
Sep 9 00:25:52.307899 kubelet[1917]: E0909 00:25:52.307831 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:52.416916 systemd-networkd[1051]: cilium_host: Link UP
Sep 9 00:25:52.417678 systemd-networkd[1051]: cilium_net: Link UP
Sep 9 00:25:52.419242 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 9 00:25:52.419316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 9 00:25:52.419359 systemd-networkd[1051]: cilium_net: Gained carrier
Sep 9 00:25:52.419530 systemd-networkd[1051]: cilium_host: Gained carrier
Sep 9 00:25:52.503950 systemd-networkd[1051]: cilium_vxlan: Link UP
Sep 9 00:25:52.503983 systemd-networkd[1051]: cilium_vxlan: Gained carrier
Sep 9 00:25:52.781992 kernel: NET: Registered PF_ALG protocol family
Sep 9 00:25:52.936120 systemd-networkd[1051]: cilium_net: Gained IPv6LL
Sep 9 00:25:53.257080 systemd-networkd[1051]: cilium_host: Gained IPv6LL
Sep 9 00:25:53.310516 kubelet[1917]: E0909 00:25:53.310447 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:53.484713 systemd-networkd[1051]: lxc_health: Link UP
Sep 9 00:25:53.502381 systemd-networkd[1051]: lxc_health: Gained carrier
Sep 9 00:25:53.503062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 9 00:25:53.896923 systemd-networkd[1051]: cilium_vxlan: Gained IPv6LL
Sep 9 00:25:53.924689 systemd-networkd[1051]: lxc2a19d984c589: Link UP
Sep 9 00:25:53.932992 kernel: eth0: renamed from tmp412c5
Sep 9 00:25:53.943061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2a19d984c589: link becomes ready
Sep 9 00:25:53.945293 systemd-networkd[1051]: lxc2a19d984c589: Gained carrier
Sep 9 00:25:53.946056 systemd-networkd[1051]: lxc8682438d944d: Link UP
Sep 9 00:25:53.958048 kernel: eth0: renamed from tmp12028
Sep 9 00:25:53.965036 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8682438d944d: link becomes ready
Sep 9 00:25:53.964788 systemd-networkd[1051]: lxc8682438d944d: Gained carrier
Sep 9 00:25:54.830187 kubelet[1917]: E0909 00:25:54.830070 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:55.048125 systemd-networkd[1051]: lxc8682438d944d: Gained IPv6LL
Sep 9 00:25:55.304153 systemd-networkd[1051]: lxc2a19d984c589: Gained IPv6LL
Sep 9 00:25:55.314590 kubelet[1917]: E0909 00:25:55.314546 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:55.432183 systemd-networkd[1051]: lxc_health: Gained IPv6LL
Sep 9 00:25:56.316504 kubelet[1917]: E0909 00:25:56.316452 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:57.736600 env[1214]: time="2025-09-09T00:25:57.736515085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:25:57.736600 env[1214]: time="2025-09-09T00:25:57.736593932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:25:57.736947 env[1214]: time="2025-09-09T00:25:57.736621014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:25:57.736947 env[1214]: time="2025-09-09T00:25:57.736768546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e pid=3151 runtime=io.containerd.runc.v2
Sep 9 00:25:57.744050 env[1214]: time="2025-09-09T00:25:57.743944791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:25:57.744050 env[1214]: time="2025-09-09T00:25:57.744010397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:25:57.744050 env[1214]: time="2025-09-09T00:25:57.744020757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:25:57.744366 env[1214]: time="2025-09-09T00:25:57.744330784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12028e4cd01f19bd1c90f78ea256a954b89bedf604250f30f188271988858754 pid=3161 runtime=io.containerd.runc.v2
Sep 9 00:25:57.755008 systemd[1]: run-containerd-runc-k8s.io-412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e-runc.gxWIFa.mount: Deactivated successfully.
Sep 9 00:25:57.756688 systemd[1]: Started cri-containerd-412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e.scope.
Sep 9 00:25:57.759876 systemd[1]: Started cri-containerd-12028e4cd01f19bd1c90f78ea256a954b89bedf604250f30f188271988858754.scope.
Sep 9 00:25:57.778674 systemd-resolved[1163]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:25:57.781549 systemd-resolved[1163]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:25:57.801262 env[1214]: time="2025-09-09T00:25:57.801213496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rwkb9,Uid:2c799c90-1f16-4e31-97bc-4eb1c438f09a,Namespace:kube-system,Attempt:0,} returns sandbox id \"12028e4cd01f19bd1c90f78ea256a954b89bedf604250f30f188271988858754\""
Sep 9 00:25:57.801884 kubelet[1917]: E0909 00:25:57.801853 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:57.805439 env[1214]: time="2025-09-09T00:25:57.805325763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv6ll,Uid:d80feaee-e9d3-46db-93a7-70adfbc45f3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e\""
Sep 9 00:25:57.806388 kubelet[1917]: E0909 00:25:57.806363 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:57.807651 env[1214]: time="2025-09-09T00:25:57.807608075Z" level=info msg="CreateContainer within sandbox \"12028e4cd01f19bd1c90f78ea256a954b89bedf604250f30f188271988858754\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:25:57.810568 env[1214]: time="2025-09-09T00:25:57.810529721Z" level=info msg="CreateContainer within sandbox \"412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 00:25:57.825550 env[1214]: time="2025-09-09T00:25:57.825497822Z" level=info msg="CreateContainer within sandbox \"12028e4cd01f19bd1c90f78ea256a954b89bedf604250f30f188271988858754\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d873189439fcf94e1b6bc979dddc5687c931932e825e5957471f49a4a95fe71\""
Sep 9 00:25:57.827997 env[1214]: time="2025-09-09T00:25:57.826465304Z" level=info msg="StartContainer for \"9d873189439fcf94e1b6bc979dddc5687c931932e825e5957471f49a4a95fe71\""
Sep 9 00:25:57.830748 env[1214]: time="2025-09-09T00:25:57.830710942Z" level=info msg="CreateContainer within sandbox \"412c5dceac0456d46d000b03c211a99d58c32ffb77e7ff947e79bf29043e277e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"118f18aeeaa1ddc4e8f99e56f52e6001b5c06039306c330015b06b5b4ea61e26\""
Sep 9 00:25:57.831464 env[1214]: time="2025-09-09T00:25:57.831346155Z" level=info msg="StartContainer for \"118f18aeeaa1ddc4e8f99e56f52e6001b5c06039306c330015b06b5b4ea61e26\""
Sep 9 00:25:57.843676 systemd[1]: Started cri-containerd-9d873189439fcf94e1b6bc979dddc5687c931932e825e5957471f49a4a95fe71.scope.
Sep 9 00:25:57.850988 systemd[1]: Started cri-containerd-118f18aeeaa1ddc4e8f99e56f52e6001b5c06039306c330015b06b5b4ea61e26.scope.
Sep 9 00:25:57.887158 env[1214]: time="2025-09-09T00:25:57.887108094Z" level=info msg="StartContainer for \"9d873189439fcf94e1b6bc979dddc5687c931932e825e5957471f49a4a95fe71\" returns successfully"
Sep 9 00:25:57.893479 env[1214]: time="2025-09-09T00:25:57.893421186Z" level=info msg="StartContainer for \"118f18aeeaa1ddc4e8f99e56f52e6001b5c06039306c330015b06b5b4ea61e26\" returns successfully"
Sep 9 00:25:58.321477 kubelet[1917]: E0909 00:25:58.321443 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:58.332695 kubelet[1917]: E0909 00:25:58.332134 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:58.353918 kubelet[1917]: I0909 00:25:58.353856 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rwkb9" podStartSLOduration=19.353838875 podStartE2EDuration="19.353838875s" podCreationTimestamp="2025-09-09 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:58.337132685 +0000 UTC m=+26.222471369" watchObservedRunningTime="2025-09-09 00:25:58.353838875 +0000 UTC m=+26.239177559"
Sep 9 00:25:58.413103 kubelet[1917]: I0909 00:25:58.413026 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sv6ll" podStartSLOduration=19.413010455 podStartE2EDuration="19.413010455s" podCreationTimestamp="2025-09-09 00:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:25:58.356316835 +0000 UTC m=+26.241655479" watchObservedRunningTime="2025-09-09 00:25:58.413010455 +0000 UTC m=+26.298349179"
Sep 9 00:25:59.330194 kubelet[1917]: E0909 00:25:59.330156 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:25:59.331459 kubelet[1917]: E0909 00:25:59.331412 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:00.332315 kubelet[1917]: E0909 00:26:00.332286 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:06.928274 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:42434.service.
Sep 9 00:26:06.973184 sshd[3312]: Accepted publickey for core from 10.0.0.1 port 42434 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:06.975238 sshd[3312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:06.980881 systemd-logind[1203]: New session 6 of user core.
Sep 9 00:26:06.981653 systemd[1]: Started session-6.scope.
Sep 9 00:26:07.109688 sshd[3312]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:07.112532 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:42434.service: Deactivated successfully.
Sep 9 00:26:07.113344 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:26:07.114020 systemd-logind[1203]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:26:07.115132 systemd-logind[1203]: Removed session 6.
Sep 9 00:26:12.123130 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:59640.service.
Sep 9 00:26:12.157336 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 59640 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:12.159077 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:12.163305 systemd-logind[1203]: New session 7 of user core.
Sep 9 00:26:12.163850 systemd[1]: Started session-7.scope.
Sep 9 00:26:12.301148 sshd[3331]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:12.303318 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:59640.service: Deactivated successfully.
Sep 9 00:26:12.304145 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:26:12.304644 systemd-logind[1203]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:26:12.305369 systemd-logind[1203]: Removed session 7.
Sep 9 00:26:17.305928 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:59646.service.
Sep 9 00:26:17.348157 sshd[3347]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:17.349979 sshd[3347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:17.357153 systemd-logind[1203]: New session 8 of user core.
Sep 9 00:26:17.357861 systemd[1]: Started session-8.scope.
Sep 9 00:26:17.506591 sshd[3347]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:17.509618 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:59646.service: Deactivated successfully.
Sep 9 00:26:17.510365 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:26:17.511906 systemd-logind[1203]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:26:17.513476 systemd-logind[1203]: Removed session 8.
Sep 9 00:26:22.512745 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:42092.service.
Sep 9 00:26:22.560743 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:22.561919 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:22.566567 systemd[1]: Started session-9.scope.
Sep 9 00:26:22.567120 systemd-logind[1203]: New session 9 of user core.
Sep 9 00:26:22.706211 sshd[3362]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:22.710704 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:42098.service.
Sep 9 00:26:22.715913 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:42092.service: Deactivated successfully.
Sep 9 00:26:22.716609 systemd-logind[1203]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:26:22.716680 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:26:22.718145 systemd-logind[1203]: Removed session 9.
Sep 9 00:26:22.745492 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 42098 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:22.746710 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:22.749905 systemd-logind[1203]: New session 10 of user core.
Sep 9 00:26:22.750945 systemd[1]: Started session-10.scope.
Sep 9 00:26:22.905441 sshd[3378]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:22.907375 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:42104.service.
Sep 9 00:26:22.909913 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 00:26:22.911213 systemd-logind[1203]: Session 10 logged out. Waiting for processes to exit.
Sep 9 00:26:22.911340 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:42098.service: Deactivated successfully.
Sep 9 00:26:22.912763 systemd-logind[1203]: Removed session 10.
Sep 9 00:26:22.955481 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 42104 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:22.956816 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:22.961042 systemd-logind[1203]: New session 11 of user core.
Sep 9 00:26:22.961430 systemd[1]: Started session-11.scope.
Sep 9 00:26:23.073742 sshd[3390]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:23.076387 systemd-logind[1203]: Session 11 logged out. Waiting for processes to exit.
Sep 9 00:26:23.076538 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:42104.service: Deactivated successfully.
Sep 9 00:26:23.077280 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 00:26:23.078015 systemd-logind[1203]: Removed session 11.
Sep 9 00:26:28.082811 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:42106.service.
Sep 9 00:26:28.118847 sshd[3405]: Accepted publickey for core from 10.0.0.1 port 42106 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:28.121944 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:28.125836 systemd-logind[1203]: New session 12 of user core.
Sep 9 00:26:28.126314 systemd[1]: Started session-12.scope.
Sep 9 00:26:28.246639 sshd[3405]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:28.248986 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:42106.service: Deactivated successfully.
Sep 9 00:26:28.249690 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 00:26:28.250226 systemd-logind[1203]: Session 12 logged out. Waiting for processes to exit.
Sep 9 00:26:28.250930 systemd-logind[1203]: Removed session 12.
Sep 9 00:26:33.251938 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:35030.service.
Sep 9 00:26:33.292639 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 35030 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:33.294565 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:33.298866 systemd-logind[1203]: New session 13 of user core.
Sep 9 00:26:33.300947 systemd[1]: Started session-13.scope.
Sep 9 00:26:33.445065 sshd[3420]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:33.447929 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:35030.service: Deactivated successfully.
Sep 9 00:26:33.448763 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:26:33.449641 systemd-logind[1203]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:26:33.453425 systemd-logind[1203]: Removed session 13.
Sep 9 00:26:38.446207 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:35040.service.
Sep 9 00:26:38.482337 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 35040 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:38.483662 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:38.488019 systemd-logind[1203]: New session 14 of user core.
Sep 9 00:26:38.488074 systemd[1]: Started session-14.scope.
Sep 9 00:26:38.628303 sshd[3433]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:38.631350 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:35040.service: Deactivated successfully.
Sep 9 00:26:38.631992 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:26:38.632517 systemd-logind[1203]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:26:38.634044 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:35046.service.
Sep 9 00:26:38.634830 systemd-logind[1203]: Removed session 14.
Sep 9 00:26:38.672716 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 35046 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:38.674313 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:38.679510 systemd-logind[1203]: New session 15 of user core. Sep 9 00:26:38.681851 systemd[1]: Started session-15.scope. Sep 9 00:26:38.871647 sshd[3446]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:38.873749 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:35046.service: Deactivated successfully. Sep 9 00:26:38.874439 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:26:38.874906 systemd-logind[1203]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:26:38.876119 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:35062.service. Sep 9 00:26:38.876737 systemd-logind[1203]: Removed session 15. Sep 9 00:26:38.911971 sshd[3457]: Accepted publickey for core from 10.0.0.1 port 35062 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:38.913971 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:38.917955 systemd-logind[1203]: New session 16 of user core. Sep 9 00:26:38.918641 systemd[1]: Started session-16.scope. Sep 9 00:26:39.587417 sshd[3457]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:39.591832 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:35074.service. Sep 9 00:26:39.592644 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:35062.service: Deactivated successfully. Sep 9 00:26:39.593542 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:26:39.594569 systemd-logind[1203]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:26:39.596454 systemd-logind[1203]: Removed session 16. 
Sep 9 00:26:39.640038 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:39.641465 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:39.644977 systemd-logind[1203]: New session 17 of user core. Sep 9 00:26:39.646036 systemd[1]: Started session-17.scope. Sep 9 00:26:39.870921 sshd[3476]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:39.874719 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:35084.service. Sep 9 00:26:39.877741 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:35074.service: Deactivated successfully. Sep 9 00:26:39.878604 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:26:39.881126 systemd-logind[1203]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:26:39.886591 systemd-logind[1203]: Removed session 17. Sep 9 00:26:39.909817 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 35084 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:39.911479 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:39.915635 systemd-logind[1203]: New session 18 of user core. Sep 9 00:26:39.916656 systemd[1]: Started session-18.scope. Sep 9 00:26:40.038236 sshd[3489]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:40.040658 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:35084.service: Deactivated successfully. Sep 9 00:26:40.041467 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:26:40.041957 systemd-logind[1203]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:26:40.042721 systemd-logind[1203]: Removed session 18. Sep 9 00:26:45.045042 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:60992.service. 
Sep 9 00:26:45.081347 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 60992 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:45.083254 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:45.087704 systemd-logind[1203]: New session 19 of user core. Sep 9 00:26:45.088250 systemd[1]: Started session-19.scope. Sep 9 00:26:45.217915 sshd[3506]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:45.221406 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:60992.service: Deactivated successfully. Sep 9 00:26:45.222504 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:26:45.223612 systemd-logind[1203]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:26:45.224718 systemd-logind[1203]: Removed session 19. Sep 9 00:26:50.224764 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:40914.service. Sep 9 00:26:50.262674 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 40914 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:26:50.263992 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:50.270029 systemd-logind[1203]: New session 20 of user core. Sep 9 00:26:50.271175 systemd[1]: Started session-20.scope. Sep 9 00:26:50.403088 sshd[3519]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:50.405924 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:40914.service: Deactivated successfully. Sep 9 00:26:50.406786 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:26:50.407378 systemd-logind[1203]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:26:50.408232 systemd-logind[1203]: Removed session 20. 
Sep 9 00:26:51.228224 kubelet[1917]: E0909 00:26:51.228175 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:55.228199 kubelet[1917]: E0909 00:26:55.227878 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:55.407840 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:40922.service.
Sep 9 00:26:55.445384 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 40922 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:55.447461 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:55.452457 systemd[1]: Started session-21.scope.
Sep 9 00:26:55.453629 systemd-logind[1203]: New session 21 of user core.
Sep 9 00:26:55.575199 sshd[3532]: pam_unix(sshd:session): session closed for user core
Sep 9 00:26:55.580549 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:40922.service: Deactivated successfully.
Sep 9 00:26:55.581181 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:26:55.582109 systemd-logind[1203]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:26:55.585532 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:40928.service.
Sep 9 00:26:55.594442 systemd-logind[1203]: Removed session 21.
Sep 9 00:26:55.619501 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 40928 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo
Sep 9 00:26:55.621565 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:26:55.626872 systemd-logind[1203]: New session 22 of user core.
Sep 9 00:26:55.628075 systemd[1]: Started session-22.scope.
Sep 9 00:26:56.230199 kubelet[1917]: E0909 00:26:56.230124 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:58.229356 kubelet[1917]: E0909 00:26:58.228512 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:58.229744 kubelet[1917]: E0909 00:26:58.229473 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:58.764433 env[1214]: time="2025-09-09T00:26:58.764384593Z" level=info msg="StopContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" with timeout 30 (s)"
Sep 9 00:26:58.765665 env[1214]: time="2025-09-09T00:26:58.765622438Z" level=info msg="Stop container \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" with signal terminated"
Sep 9 00:26:58.794013 systemd[1]: cri-containerd-535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15.scope: Deactivated successfully.
Sep 9 00:26:58.808694 env[1214]: time="2025-09-09T00:26:58.808603575Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:26:58.815852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15-rootfs.mount: Deactivated successfully.
Sep 9 00:26:58.819394 env[1214]: time="2025-09-09T00:26:58.819353379Z" level=info msg="StopContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" with timeout 2 (s)"
Sep 9 00:26:58.819796 env[1214]: time="2025-09-09T00:26:58.819762341Z" level=info msg="Stop container \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" with signal terminated"
Sep 9 00:26:58.826737 systemd-networkd[1051]: lxc_health: Link DOWN
Sep 9 00:26:58.826743 systemd-networkd[1051]: lxc_health: Lost carrier
Sep 9 00:26:58.855171 env[1214]: time="2025-09-09T00:26:58.855113926Z" level=info msg="shim disconnected" id=535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15
Sep 9 00:26:58.855171 env[1214]: time="2025-09-09T00:26:58.855165047Z" level=warning msg="cleaning up after shim disconnected" id=535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15 namespace=k8s.io
Sep 9 00:26:58.855171 env[1214]: time="2025-09-09T00:26:58.855176727Z" level=info msg="cleaning up dead shim"
Sep 9 00:26:58.863153 env[1214]: time="2025-09-09T00:26:58.863107719Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n"
Sep 9 00:26:58.864629 systemd[1]: cri-containerd-961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3.scope: Deactivated successfully.
Sep 9 00:26:58.864994 systemd[1]: cri-containerd-961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3.scope: Consumed 6.469s CPU time.
Sep 9 00:26:58.877162 env[1214]: time="2025-09-09T00:26:58.876426254Z" level=info msg="StopContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" returns successfully"
Sep 9 00:26:58.877615 env[1214]: time="2025-09-09T00:26:58.877561099Z" level=info msg="StopPodSandbox for \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\""
Sep 9 00:26:58.878582 env[1214]: time="2025-09-09T00:26:58.877648099Z" level=info msg="Container to stop \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.880607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0-shm.mount: Deactivated successfully.
Sep 9 00:26:58.887419 systemd[1]: cri-containerd-2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0.scope: Deactivated successfully.
Sep 9 00:26:58.892562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3-rootfs.mount: Deactivated successfully.
Sep 9 00:26:58.902300 env[1214]: time="2025-09-09T00:26:58.902242721Z" level=info msg="shim disconnected" id=961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3
Sep 9 00:26:58.902300 env[1214]: time="2025-09-09T00:26:58.902291761Z" level=warning msg="cleaning up after shim disconnected" id=961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3 namespace=k8s.io
Sep 9 00:26:58.902300 env[1214]: time="2025-09-09T00:26:58.902301361Z" level=info msg="cleaning up dead shim"
Sep 9 00:26:58.910919 env[1214]: time="2025-09-09T00:26:58.910870556Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3642 runtime=io.containerd.runc.v2\n"
Sep 9 00:26:58.913600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0-rootfs.mount: Deactivated successfully.
Sep 9 00:26:58.916851 env[1214]: time="2025-09-09T00:26:58.916747260Z" level=info msg="StopContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" returns successfully"
Sep 9 00:26:58.917477 env[1214]: time="2025-09-09T00:26:58.917441623Z" level=info msg="StopPodSandbox for \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\""
Sep 9 00:26:58.917526 env[1214]: time="2025-09-09T00:26:58.917504263Z" level=info msg="Container to stop \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.917526 env[1214]: time="2025-09-09T00:26:58.917520783Z" level=info msg="Container to stop \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.917579 env[1214]: time="2025-09-09T00:26:58.917532344Z" level=info msg="Container to stop \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.917579 env[1214]: time="2025-09-09T00:26:58.917544104Z" level=info msg="Container to stop \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.917579 env[1214]: time="2025-09-09T00:26:58.917554184Z" level=info msg="Container to stop \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:26:58.923127 systemd[1]: cri-containerd-df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c.scope: Deactivated successfully.
Sep 9 00:26:58.929090 env[1214]: time="2025-09-09T00:26:58.929033311Z" level=info msg="shim disconnected" id=2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0
Sep 9 00:26:58.929090 env[1214]: time="2025-09-09T00:26:58.929084511Z" level=warning msg="cleaning up after shim disconnected" id=2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0 namespace=k8s.io
Sep 9 00:26:58.929090 env[1214]: time="2025-09-09T00:26:58.929096631Z" level=info msg="cleaning up dead shim"
Sep 9 00:26:58.937100 env[1214]: time="2025-09-09T00:26:58.937055144Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n"
Sep 9 00:26:58.937415 env[1214]: time="2025-09-09T00:26:58.937382505Z" level=info msg="TearDown network for sandbox \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\" successfully"
Sep 9 00:26:58.937454 env[1214]: time="2025-09-09T00:26:58.937416545Z" level=info msg="StopPodSandbox for \"2fe06fe804f6689eeba0dd67c0bbb6b5300028fbae7e725420376e21457d91c0\" returns successfully"
Sep 9 00:26:58.952946 env[1214]: time="2025-09-09T00:26:58.952885449Z" level=info msg="shim disconnected" id=df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c
Sep 9 00:26:58.953244 env[1214]: time="2025-09-09T00:26:58.952994810Z" level=warning msg="cleaning up after shim disconnected" id=df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c namespace=k8s.io
Sep 9 00:26:58.953244 env[1214]: time="2025-09-09T00:26:58.953007730Z" level=info msg="cleaning up dead shim"
Sep 9 00:26:58.960826 env[1214]: time="2025-09-09T00:26:58.960763002Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:26:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3690 runtime=io.containerd.runc.v2\n"
Sep 9 00:26:58.961127 env[1214]: time="2025-09-09T00:26:58.961094483Z" level=info msg="TearDown network for sandbox \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" successfully"
Sep 9 00:26:58.961166 env[1214]: time="2025-09-09T00:26:58.961127923Z" level=info msg="StopPodSandbox for \"df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c\" returns successfully"
Sep 9 00:26:59.036508 kubelet[1917]: I0909 00:26:59.036377 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ced117e-cc94-4eb5-a11d-164a70205435-cilium-config-path\") pod \"9ced117e-cc94-4eb5-a11d-164a70205435\" (UID: \"9ced117e-cc94-4eb5-a11d-164a70205435\") "
Sep 9 00:26:59.036508 kubelet[1917]: I0909 00:26:59.036425 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02d354d5-fc1e-46d1-8030-6be66b8a4427-clustermesh-secrets\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.036508 kubelet[1917]: I0909 00:26:59.036447 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99r92\" (UniqueName: \"kubernetes.io/projected/9ced117e-cc94-4eb5-a11d-164a70205435-kube-api-access-99r92\") pod \"9ced117e-cc94-4eb5-a11d-164a70205435\" (UID: \"9ced117e-cc94-4eb5-a11d-164a70205435\") "
Sep 9 00:26:59.036508 kubelet[1917]: I0909 00:26:59.036465 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-config-path\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.036508 kubelet[1917]: I0909 00:26:59.036483 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-run\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.037997 kubelet[1917]: I0909 00:26:59.037871 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-xtables-lock\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038515 kubelet[1917]: I0909 00:26:59.038488 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-kernel\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038580 kubelet[1917]: I0909 00:26:59.038528 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-lib-modules\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038580 kubelet[1917]: I0909 00:26:59.038546 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-bpf-maps\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038580 kubelet[1917]: I0909 00:26:59.038562 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-cgroup\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038580 kubelet[1917]: I0909 00:26:59.038578 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cni-path\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038684 kubelet[1917]: I0909 00:26:59.038591 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-hostproc\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038684 kubelet[1917]: I0909 00:26:59.038613 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-hubble-tls\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038684 kubelet[1917]: I0909 00:26:59.038626 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-etc-cni-netd\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038684 kubelet[1917]: I0909 00:26:59.038643 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4xg\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-kube-api-access-bj4xg\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.038684 kubelet[1917]: I0909 00:26:59.038657 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-net\") pod \"02d354d5-fc1e-46d1-8030-6be66b8a4427\" (UID: \"02d354d5-fc1e-46d1-8030-6be66b8a4427\") "
Sep 9 00:26:59.039558 kubelet[1917]: I0909 00:26:59.039520 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039558 kubelet[1917]: I0909 00:26:59.039533 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039668 kubelet[1917]: I0909 00:26:59.039594 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039668 kubelet[1917]: I0909 00:26:59.039614 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039668 kubelet[1917]: I0909 00:26:59.039631 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039668 kubelet[1917]: I0909 00:26:59.039644 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039668 kubelet[1917]: I0909 00:26:59.039658 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cni-path" (OuterVolumeSpecName: "cni-path") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.039822 kubelet[1917]: I0909 00:26:59.039670 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-hostproc" (OuterVolumeSpecName: "hostproc") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.041181 kubelet[1917]: I0909 00:26:59.041144 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.042540 kubelet[1917]: I0909 00:26:59.041377 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:26:59.042540 kubelet[1917]: I0909 00:26:59.042320 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ced117e-cc94-4eb5-a11d-164a70205435-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9ced117e-cc94-4eb5-a11d-164a70205435" (UID: "9ced117e-cc94-4eb5-a11d-164a70205435"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 00:26:59.042540 kubelet[1917]: I0909 00:26:59.042478 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 00:26:59.044421 kubelet[1917]: I0909 00:26:59.044371 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ced117e-cc94-4eb5-a11d-164a70205435-kube-api-access-99r92" (OuterVolumeSpecName: "kube-api-access-99r92") pod "9ced117e-cc94-4eb5-a11d-164a70205435" (UID: "9ced117e-cc94-4eb5-a11d-164a70205435"). InnerVolumeSpecName "kube-api-access-99r92". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:26:59.044619 kubelet[1917]: I0909 00:26:59.044584 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:26:59.044894 kubelet[1917]: I0909 00:26:59.044862 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-kube-api-access-bj4xg" (OuterVolumeSpecName: "kube-api-access-bj4xg") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "kube-api-access-bj4xg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:26:59.045004 kubelet[1917]: I0909 00:26:59.044981 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d354d5-fc1e-46d1-8030-6be66b8a4427-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "02d354d5-fc1e-46d1-8030-6be66b8a4427" (UID: "02d354d5-fc1e-46d1-8030-6be66b8a4427"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 00:26:59.139410 kubelet[1917]: I0909 00:26:59.139343 1917 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139410 kubelet[1917]: I0909 00:26:59.139391 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139410 kubelet[1917]: I0909 00:26:59.139404 1917 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139410 kubelet[1917]: I0909 00:26:59.139414 1917 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139410 kubelet[1917]: I0909 00:26:59.139423 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139431 1917 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139443 1917 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139454 1917 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139461 1917 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139470 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bj4xg\" (UniqueName: \"kubernetes.io/projected/02d354d5-fc1e-46d1-8030-6be66b8a4427-kube-api-access-bj4xg\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139482 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139492 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ced117e-cc94-4eb5-a11d-164a70205435-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.139697 kubelet[1917]: I0909 00:26:59.139501 1917 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02d354d5-fc1e-46d1-8030-6be66b8a4427-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.140030 kubelet[1917]: I0909 00:26:59.139510 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99r92\" (UniqueName: \"kubernetes.io/projected/9ced117e-cc94-4eb5-a11d-164a70205435-kube-api-access-99r92\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.140030 kubelet[1917]: I0909 00:26:59.139520 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.140030 kubelet[1917]: I0909 00:26:59.139530 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02d354d5-fc1e-46d1-8030-6be66b8a4427-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 9 00:26:59.471144 kubelet[1917]: I0909 00:26:59.470927 1917 scope.go:117] "RemoveContainer" containerID="961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3"
Sep 9 00:26:59.473062 systemd[1]: Removed slice kubepods-burstable-pod02d354d5_fc1e_46d1_8030_6be66b8a4427.slice.
Sep 9 00:26:59.473178 systemd[1]: kubepods-burstable-pod02d354d5_fc1e_46d1_8030_6be66b8a4427.slice: Consumed 6.593s CPU time.
Sep 9 00:26:59.475527 env[1214]: time="2025-09-09T00:26:59.475488019Z" level=info msg="RemoveContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\""
Sep 9 00:26:59.476509 systemd[1]: Removed slice kubepods-besteffort-pod9ced117e_cc94_4eb5_a11d_164a70205435.slice.
Sep 9 00:26:59.486241 env[1214]: time="2025-09-09T00:26:59.486178749Z" level=info msg="RemoveContainer for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" returns successfully" Sep 9 00:26:59.486530 kubelet[1917]: I0909 00:26:59.486489 1917 scope.go:117] "RemoveContainer" containerID="3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6" Sep 9 00:26:59.488029 env[1214]: time="2025-09-09T00:26:59.487988878Z" level=info msg="RemoveContainer for \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\"" Sep 9 00:26:59.494831 env[1214]: time="2025-09-09T00:26:59.494771790Z" level=info msg="RemoveContainer for \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\" returns successfully" Sep 9 00:26:59.495245 kubelet[1917]: I0909 00:26:59.495141 1917 scope.go:117] "RemoveContainer" containerID="7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243" Sep 9 00:26:59.497666 env[1214]: time="2025-09-09T00:26:59.497587924Z" level=info msg="RemoveContainer for \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\"" Sep 9 00:26:59.504798 env[1214]: time="2025-09-09T00:26:59.504736357Z" level=info msg="RemoveContainer for \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\" returns successfully" Sep 9 00:26:59.505526 kubelet[1917]: I0909 00:26:59.505086 1917 scope.go:117] "RemoveContainer" containerID="fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8" Sep 9 00:26:59.507363 env[1214]: time="2025-09-09T00:26:59.507243209Z" level=info msg="RemoveContainer for \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\"" Sep 9 00:26:59.511833 env[1214]: time="2025-09-09T00:26:59.511772551Z" level=info msg="RemoveContainer for \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\" returns successfully" Sep 9 00:26:59.512244 kubelet[1917]: I0909 00:26:59.512178 1917 scope.go:117] "RemoveContainer" 
containerID="2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f" Sep 9 00:26:59.514485 env[1214]: time="2025-09-09T00:26:59.513726240Z" level=info msg="RemoveContainer for \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\"" Sep 9 00:26:59.517942 env[1214]: time="2025-09-09T00:26:59.517828620Z" level=info msg="RemoveContainer for \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\" returns successfully" Sep 9 00:26:59.518194 kubelet[1917]: I0909 00:26:59.518096 1917 scope.go:117] "RemoveContainer" containerID="961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3" Sep 9 00:26:59.518449 env[1214]: time="2025-09-09T00:26:59.518374542Z" level=error msg="ContainerStatus for \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\": not found" Sep 9 00:26:59.519065 kubelet[1917]: E0909 00:26:59.519024 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\": not found" containerID="961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3" Sep 9 00:26:59.519144 kubelet[1917]: I0909 00:26:59.519060 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3"} err="failed to get container status \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"961b18e93c89da2dceb7f0d05cbec7a1aab1d02540c1dd6195c586da1b05f6d3\": not found" Sep 9 00:26:59.519181 kubelet[1917]: I0909 00:26:59.519147 1917 scope.go:117] "RemoveContainer" 
containerID="3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6" Sep 9 00:26:59.519441 env[1214]: time="2025-09-09T00:26:59.519378347Z" level=error msg="ContainerStatus for \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\": not found" Sep 9 00:26:59.519683 kubelet[1917]: E0909 00:26:59.519553 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\": not found" containerID="3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6" Sep 9 00:26:59.519683 kubelet[1917]: I0909 00:26:59.519589 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6"} err="failed to get container status \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cf6338b630ef11351acfd98b84fb775d22fb77c8d25095a58f02db7262a21d6\": not found" Sep 9 00:26:59.519683 kubelet[1917]: I0909 00:26:59.519604 1917 scope.go:117] "RemoveContainer" containerID="7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243" Sep 9 00:26:59.520033 env[1214]: time="2025-09-09T00:26:59.519979070Z" level=error msg="ContainerStatus for \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\": not found" Sep 9 00:26:59.520198 kubelet[1917]: E0909 00:26:59.520177 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\": not found" containerID="7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243" Sep 9 00:26:59.520370 kubelet[1917]: I0909 00:26:59.520202 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243"} err="failed to get container status \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c5abeb013b1eb207836ffcb5905f1c4c241e3a9c9f57bffa9154c63bdd30243\": not found" Sep 9 00:26:59.520370 kubelet[1917]: I0909 00:26:59.520216 1917 scope.go:117] "RemoveContainer" containerID="fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8" Sep 9 00:26:59.520447 env[1214]: time="2025-09-09T00:26:59.520366632Z" level=error msg="ContainerStatus for \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\": not found" Sep 9 00:26:59.521163 kubelet[1917]: E0909 00:26:59.521073 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\": not found" containerID="fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8" Sep 9 00:26:59.521163 kubelet[1917]: I0909 00:26:59.521105 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8"} err="failed to get container status \"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fabf441ccefb30e6d0a58d195882a53b67d7e4ae4967ccab890b6be82138c8b8\": not found" Sep 9 00:26:59.521163 kubelet[1917]: I0909 00:26:59.521121 1917 scope.go:117] "RemoveContainer" containerID="2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f" Sep 9 00:26:59.521610 kubelet[1917]: E0909 00:26:59.521422 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\": not found" containerID="2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f" Sep 9 00:26:59.521610 kubelet[1917]: I0909 00:26:59.521441 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f"} err="failed to get container status \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\": not found" Sep 9 00:26:59.521610 kubelet[1917]: I0909 00:26:59.521453 1917 scope.go:117] "RemoveContainer" containerID="535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15" Sep 9 00:26:59.521828 env[1214]: time="2025-09-09T00:26:59.521291596Z" level=error msg="ContainerStatus for \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bb91ebd1063e8a05a88ae9067d77149bd869c32c54ddd6928b7cc40b71cfb1f\": not found" Sep 9 00:26:59.527817 env[1214]: time="2025-09-09T00:26:59.527715427Z" level=info msg="RemoveContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\"" Sep 9 00:26:59.530913 env[1214]: time="2025-09-09T00:26:59.530872562Z" level=info msg="RemoveContainer for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" 
returns successfully" Sep 9 00:26:59.531510 kubelet[1917]: I0909 00:26:59.531229 1917 scope.go:117] "RemoveContainer" containerID="535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15" Sep 9 00:26:59.531599 env[1214]: time="2025-09-09T00:26:59.531485644Z" level=error msg="ContainerStatus for \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\": not found" Sep 9 00:26:59.531788 kubelet[1917]: E0909 00:26:59.531744 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\": not found" containerID="535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15" Sep 9 00:26:59.531788 kubelet[1917]: I0909 00:26:59.531775 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15"} err="failed to get container status \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\": rpc error: code = NotFound desc = an error occurred when try to find container \"535ef087c042331ecb122ebce00c01076cc41f27be95f7590219c0750d406f15\": not found" Sep 9 00:26:59.772114 systemd[1]: var-lib-kubelet-pods-9ced117e\x2dcc94\x2d4eb5\x2da11d\x2d164a70205435-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99r92.mount: Deactivated successfully. Sep 9 00:26:59.772211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c-rootfs.mount: Deactivated successfully. Sep 9 00:26:59.772260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df3e96376b6550e059b712704b27b7676e2d5cfcdde83fe1df24988b3cf8f77c-shm.mount: Deactivated successfully. 
Sep 9 00:26:59.772314 systemd[1]: var-lib-kubelet-pods-02d354d5\x2dfc1e\x2d46d1\x2d8030\x2d6be66b8a4427-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbj4xg.mount: Deactivated successfully. Sep 9 00:26:59.772363 systemd[1]: var-lib-kubelet-pods-02d354d5\x2dfc1e\x2d46d1\x2d8030\x2d6be66b8a4427-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:26:59.772413 systemd[1]: var-lib-kubelet-pods-02d354d5\x2dfc1e\x2d46d1\x2d8030\x2d6be66b8a4427-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:27:00.233333 kubelet[1917]: I0909 00:27:00.233283 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d354d5-fc1e-46d1-8030-6be66b8a4427" path="/var/lib/kubelet/pods/02d354d5-fc1e-46d1-8030-6be66b8a4427/volumes" Sep 9 00:27:00.233978 kubelet[1917]: I0909 00:27:00.233928 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ced117e-cc94-4eb5-a11d-164a70205435" path="/var/lib/kubelet/pods/9ced117e-cc94-4eb5-a11d-164a70205435/volumes" Sep 9 00:27:00.683489 sshd[3545]: pam_unix(sshd:session): session closed for user core Sep 9 00:27:00.687776 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:40928.service: Deactivated successfully. Sep 9 00:27:00.688661 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:27:00.688898 systemd[1]: session-22.scope: Consumed 2.338s CPU time. Sep 9 00:27:00.689310 systemd-logind[1203]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:27:00.690703 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:50470.service. Sep 9 00:27:00.692053 systemd-logind[1203]: Removed session 22. Sep 9 00:27:00.732563 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 50470 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:27:00.734367 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:27:00.742023 systemd-logind[1203]: New session 23 of user core. 
Sep 9 00:27:00.747043 systemd[1]: Started session-23.scope. Sep 9 00:27:02.277704 kubelet[1917]: E0909 00:27:02.277127 1917 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:27:02.333281 sshd[3710]: pam_unix(sshd:session): session closed for user core Sep 9 00:27:02.336899 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:50472.service. Sep 9 00:27:02.337769 systemd-logind[1203]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:27:02.338765 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:50470.service: Deactivated successfully. Sep 9 00:27:02.339646 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:27:02.339891 systemd[1]: session-23.scope: Consumed 1.438s CPU time. Sep 9 00:27:02.340883 systemd-logind[1203]: Removed session 23. Sep 9 00:27:02.377029 systemd[1]: Created slice kubepods-burstable-pod6c0fa14a_9ec3_4073_9c4b_ebf8b2c91302.slice. Sep 9 00:27:02.379072 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 50472 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:27:02.381466 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:27:02.385360 systemd-logind[1203]: New session 24 of user core. Sep 9 00:27:02.386459 systemd[1]: Started session-24.scope. 
Sep 9 00:27:02.457685 kubelet[1917]: I0909 00:27:02.457643 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cni-path\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457811 kubelet[1917]: I0909 00:27:02.457696 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-etc-cni-netd\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457811 kubelet[1917]: I0909 00:27:02.457782 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-xtables-lock\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457880 kubelet[1917]: I0909 00:27:02.457854 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-clustermesh-secrets\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457924 kubelet[1917]: I0909 00:27:02.457904 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-config-path\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457999 kubelet[1917]: I0909 00:27:02.457932 1917 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hubble-tls\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.457999 kubelet[1917]: I0909 00:27:02.457951 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bhw\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-kube-api-access-79bhw\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458052 kubelet[1917]: I0909 00:27:02.458013 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-bpf-maps\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458052 kubelet[1917]: I0909 00:27:02.458031 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-cgroup\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458110 kubelet[1917]: I0909 00:27:02.458073 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-lib-modules\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458110 kubelet[1917]: I0909 00:27:02.458096 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-kernel\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458162 kubelet[1917]: I0909 00:27:02.458114 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-run\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458162 kubelet[1917]: I0909 00:27:02.458155 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-net\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458204 kubelet[1917]: I0909 00:27:02.458175 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-ipsec-secrets\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.458245 kubelet[1917]: I0909 00:27:02.458220 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hostproc\") pod \"cilium-5s5bc\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " pod="kube-system/cilium-5s5bc" Sep 9 00:27:02.529934 sshd[3721]: pam_unix(sshd:session): session closed for user core Sep 9 00:27:02.536658 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:50472.service: Deactivated successfully. Sep 9 00:27:02.538006 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 9 00:27:02.538551 kubelet[1917]: E0909 00:27:02.538505 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-79bhw lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5s5bc" podUID="6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" Sep 9 00:27:02.538685 systemd-logind[1203]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:27:02.540755 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:50484.service. Sep 9 00:27:02.543203 systemd-logind[1203]: Removed session 24. Sep 9 00:27:02.585277 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 50484 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:27:02.587187 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:27:02.592384 systemd[1]: Started session-25.scope. Sep 9 00:27:02.592689 systemd-logind[1203]: New session 25 of user core. 
Sep 9 00:27:03.568158 kubelet[1917]: I0909 00:27:03.568113 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-lib-modules\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568158 kubelet[1917]: I0909 00:27:03.568166 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cni-path\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568181 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-net\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568266 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-run\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568288 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-ipsec-secrets\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568293 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cni-path" (OuterVolumeSpecName: "cni-path") pod 
"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568307 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-etc-cni-netd\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568520 kubelet[1917]: I0909 00:27:03.568334 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568354 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-cgroup\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568381 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bhw\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-kube-api-access-79bhw\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568397 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-kernel\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: 
\"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568419 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-config-path\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568436 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hostproc\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568663 kubelet[1917]: I0909 00:27:03.568449 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-xtables-lock\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568801 kubelet[1917]: I0909 00:27:03.568465 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hubble-tls\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568801 kubelet[1917]: I0909 00:27:03.568480 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-bpf-maps\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568801 kubelet[1917]: I0909 00:27:03.568496 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-clustermesh-secrets\") pod \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\" (UID: \"6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302\") " Sep 9 00:27:03.568801 kubelet[1917]: I0909 00:27:03.568535 1917 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.568801 kubelet[1917]: I0909 00:27:03.568544 1917 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.569527 kubelet[1917]: I0909 00:27:03.568355 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569627 kubelet[1917]: I0909 00:27:03.568365 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569697 kubelet[1917]: I0909 00:27:03.568865 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569774 kubelet[1917]: I0909 00:27:03.568892 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569843 kubelet[1917]: I0909 00:27:03.568903 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hostproc" (OuterVolumeSpecName: "hostproc") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569901 kubelet[1917]: I0909 00:27:03.569478 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.569951 kubelet[1917]: I0909 00:27:03.569811 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.570045 kubelet[1917]: I0909 00:27:03.570027 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:27:03.570475 kubelet[1917]: I0909 00:27:03.570437 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:27:03.572834 systemd[1]: var-lib-kubelet-pods-6c0fa14a\x2d9ec3\x2d4073\x2d9c4b\x2debf8b2c91302-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:27:03.574353 kubelet[1917]: I0909 00:27:03.574325 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:27:03.574592 kubelet[1917]: I0909 00:27:03.574571 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:27:03.574665 systemd[1]: var-lib-kubelet-pods-6c0fa14a\x2d9ec3\x2d4073\x2d9c4b\x2debf8b2c91302-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:27:03.574770 systemd[1]: var-lib-kubelet-pods-6c0fa14a\x2d9ec3\x2d4073\x2d9c4b\x2debf8b2c91302-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:27:03.575240 kubelet[1917]: I0909 00:27:03.575212 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:27:03.576750 kubelet[1917]: I0909 00:27:03.576717 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-kube-api-access-79bhw" (OuterVolumeSpecName: "kube-api-access-79bhw") pod "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" (UID: "6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302"). InnerVolumeSpecName "kube-api-access-79bhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:27:03.577622 systemd[1]: var-lib-kubelet-pods-6c0fa14a\x2d9ec3\x2d4073\x2d9c4b\x2debf8b2c91302-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79bhw.mount: Deactivated successfully. 
Sep 9 00:27:03.669377 kubelet[1917]: I0909 00:27:03.669328 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669377 kubelet[1917]: I0909 00:27:03.669364 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79bhw\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-kube-api-access-79bhw\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669377 kubelet[1917]: I0909 00:27:03.669378 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669377 kubelet[1917]: I0909 00:27:03.669386 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669395 1917 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669404 1917 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669411 1917 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669419 1917 reconciler_common.go:299] "Volume 
detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669426 1917 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669434 1917 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669441 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669650 kubelet[1917]: I0909 00:27:03.669449 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:03.669837 kubelet[1917]: I0909 00:27:03.669456 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:04.129205 kubelet[1917]: I0909 00:27:04.129144 1917 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:27:04Z","lastTransitionTime":"2025-09-09T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:27:04.233893 systemd[1]: 
Removed slice kubepods-burstable-pod6c0fa14a_9ec3_4073_9c4b_ebf8b2c91302.slice. Sep 9 00:27:04.541242 systemd[1]: Created slice kubepods-burstable-pod833195a9_8108_4a44_ac22_84b69c55a340.slice. Sep 9 00:27:04.576251 kubelet[1917]: I0909 00:27:04.576200 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-etc-cni-netd\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.576628 kubelet[1917]: I0909 00:27:04.576610 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/833195a9-8108-4a44-ac22-84b69c55a340-cilium-config-path\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.576886 kubelet[1917]: I0909 00:27:04.576867 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-hostproc\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577045 kubelet[1917]: I0909 00:27:04.577031 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/833195a9-8108-4a44-ac22-84b69c55a340-clustermesh-secrets\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577165 kubelet[1917]: I0909 00:27:04.577150 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/833195a9-8108-4a44-ac22-84b69c55a340-cilium-ipsec-secrets\") pod \"cilium-2tfzm\" (UID: 
\"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577261 kubelet[1917]: I0909 00:27:04.577247 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctcxt\" (UniqueName: \"kubernetes.io/projected/833195a9-8108-4a44-ac22-84b69c55a340-kube-api-access-ctcxt\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577361 kubelet[1917]: I0909 00:27:04.577349 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-xtables-lock\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577460 kubelet[1917]: I0909 00:27:04.577448 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-host-proc-sys-kernel\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577569 kubelet[1917]: I0909 00:27:04.577558 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-bpf-maps\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577679 kubelet[1917]: I0909 00:27:04.577659 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-cilium-cgroup\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577784 kubelet[1917]: I0909 
00:27:04.577771 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-cni-path\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577883 kubelet[1917]: I0909 00:27:04.577871 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/833195a9-8108-4a44-ac22-84b69c55a340-hubble-tls\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.577990 kubelet[1917]: I0909 00:27:04.577977 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-cilium-run\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.578111 kubelet[1917]: I0909 00:27:04.578098 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-lib-modules\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.578213 kubelet[1917]: I0909 00:27:04.578200 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/833195a9-8108-4a44-ac22-84b69c55a340-host-proc-sys-net\") pod \"cilium-2tfzm\" (UID: \"833195a9-8108-4a44-ac22-84b69c55a340\") " pod="kube-system/cilium-2tfzm" Sep 9 00:27:04.846145 kubelet[1917]: E0909 00:27:04.845546 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:04.847132 env[1214]: time="2025-09-09T00:27:04.846117267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tfzm,Uid:833195a9-8108-4a44-ac22-84b69c55a340,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:04.865939 env[1214]: time="2025-09-09T00:27:04.865852817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:27:04.865939 env[1214]: time="2025-09-09T00:27:04.865937337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:27:04.866166 env[1214]: time="2025-09-09T00:27:04.866002138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:27:04.866283 env[1214]: time="2025-09-09T00:27:04.866248020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a pid=3766 runtime=io.containerd.runc.v2 Sep 9 00:27:04.886882 systemd[1]: Started cri-containerd-e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a.scope. 
Sep 9 00:27:04.921614 env[1214]: time="2025-09-09T00:27:04.921571280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tfzm,Uid:833195a9-8108-4a44-ac22-84b69c55a340,Namespace:kube-system,Attempt:0,} returns sandbox id \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\"" Sep 9 00:27:04.922451 kubelet[1917]: E0909 00:27:04.922426 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:04.927260 env[1214]: time="2025-09-09T00:27:04.927220243Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:27:04.938027 env[1214]: time="2025-09-09T00:27:04.937973244Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9\"" Sep 9 00:27:04.938716 env[1214]: time="2025-09-09T00:27:04.938688650Z" level=info msg="StartContainer for \"ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9\"" Sep 9 00:27:04.952944 systemd[1]: Started cri-containerd-ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9.scope. Sep 9 00:27:04.984591 env[1214]: time="2025-09-09T00:27:04.984539878Z" level=info msg="StartContainer for \"ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9\" returns successfully" Sep 9 00:27:04.991754 systemd[1]: cri-containerd-ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9.scope: Deactivated successfully. 
Sep 9 00:27:05.020206 env[1214]: time="2025-09-09T00:27:05.020161478Z" level=info msg="shim disconnected" id=ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9 Sep 9 00:27:05.020206 env[1214]: time="2025-09-09T00:27:05.020204399Z" level=warning msg="cleaning up after shim disconnected" id=ca61b2dea9b52386f73bd9ab99366d6b24684c059bdbb62f5a70dc8619a71cb9 namespace=k8s.io Sep 9 00:27:05.020404 env[1214]: time="2025-09-09T00:27:05.020216559Z" level=info msg="cleaning up dead shim" Sep 9 00:27:05.027507 env[1214]: time="2025-09-09T00:27:05.027465937Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3856 runtime=io.containerd.runc.v2\ntime=\"2025-09-09T00:27:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Sep 9 00:27:05.481070 kubelet[1917]: E0909 00:27:05.481042 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:05.486462 env[1214]: time="2025-09-09T00:27:05.486416580Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:27:05.501966 env[1214]: time="2025-09-09T00:27:05.501913385Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc\"" Sep 9 00:27:05.503826 env[1214]: time="2025-09-09T00:27:05.503766840Z" level=info msg="StartContainer for \"a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc\"" Sep 9 00:27:05.516788 systemd[1]: Started 
cri-containerd-a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc.scope. Sep 9 00:27:05.546271 env[1214]: time="2025-09-09T00:27:05.546178944Z" level=info msg="StartContainer for \"a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc\" returns successfully" Sep 9 00:27:05.553290 systemd[1]: cri-containerd-a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc.scope: Deactivated successfully. Sep 9 00:27:05.573690 env[1214]: time="2025-09-09T00:27:05.573637527Z" level=info msg="shim disconnected" id=a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc Sep 9 00:27:05.573886 env[1214]: time="2025-09-09T00:27:05.573692408Z" level=warning msg="cleaning up after shim disconnected" id=a975501eadbb9b9b6a085637c8531a7a4706ca4e112302a1311bfa2b7b9767dc namespace=k8s.io Sep 9 00:27:05.573886 env[1214]: time="2025-09-09T00:27:05.573707448Z" level=info msg="cleaning up dead shim" Sep 9 00:27:05.580245 env[1214]: time="2025-09-09T00:27:05.580206860Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Sep 9 00:27:06.231234 kubelet[1917]: I0909 00:27:06.231197 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302" path="/var/lib/kubelet/pods/6c0fa14a-9ec3-4073-9c4b-ebf8b2c91302/volumes" Sep 9 00:27:06.484836 kubelet[1917]: E0909 00:27:06.484740 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:06.491051 env[1214]: time="2025-09-09T00:27:06.489975445Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:27:06.506589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490668291.mount: 
Deactivated successfully. Sep 9 00:27:06.510599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387292585.mount: Deactivated successfully. Sep 9 00:27:06.513591 env[1214]: time="2025-09-09T00:27:06.513527887Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e\"" Sep 9 00:27:06.515286 env[1214]: time="2025-09-09T00:27:06.514329214Z" level=info msg="StartContainer for \"8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e\"" Sep 9 00:27:06.531211 systemd[1]: Started cri-containerd-8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e.scope. Sep 9 00:27:06.564428 systemd[1]: cri-containerd-8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e.scope: Deactivated successfully. Sep 9 00:27:06.568015 env[1214]: time="2025-09-09T00:27:06.567938916Z" level=info msg="StartContainer for \"8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e\" returns successfully" Sep 9 00:27:06.588766 env[1214]: time="2025-09-09T00:27:06.588703575Z" level=info msg="shim disconnected" id=8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e Sep 9 00:27:06.588766 env[1214]: time="2025-09-09T00:27:06.588751095Z" level=warning msg="cleaning up after shim disconnected" id=8ff6b28afe069e5fa4d2399c8ae67edc87dae1419a6ed18a31c855ed7c78881e namespace=k8s.io Sep 9 00:27:06.588766 env[1214]: time="2025-09-09T00:27:06.588761535Z" level=info msg="cleaning up dead shim" Sep 9 00:27:06.597006 env[1214]: time="2025-09-09T00:27:06.596954646Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n" Sep 9 00:27:07.278115 kubelet[1917]: E0909 00:27:07.278078 1917 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:27:07.489994 kubelet[1917]: E0909 00:27:07.489953 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:07.493978 env[1214]: time="2025-09-09T00:27:07.493923769Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:27:07.509566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792937359.mount: Deactivated successfully. Sep 9 00:27:07.512550 env[1214]: time="2025-09-09T00:27:07.512476738Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c\"" Sep 9 00:27:07.513588 env[1214]: time="2025-09-09T00:27:07.513548108Z" level=info msg="StartContainer for \"d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c\"" Sep 9 00:27:07.534930 systemd[1]: Started cri-containerd-d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c.scope. Sep 9 00:27:07.564234 systemd[1]: cri-containerd-d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c.scope: Deactivated successfully. 
Sep 9 00:27:07.564930 env[1214]: time="2025-09-09T00:27:07.564893455Z" level=info msg="StartContainer for \"d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c\" returns successfully" Sep 9 00:27:07.592416 env[1214]: time="2025-09-09T00:27:07.592369945Z" level=info msg="shim disconnected" id=d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c Sep 9 00:27:07.592828 env[1214]: time="2025-09-09T00:27:07.592802749Z" level=warning msg="cleaning up after shim disconnected" id=d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c namespace=k8s.io Sep 9 00:27:07.592900 env[1214]: time="2025-09-09T00:27:07.592887070Z" level=info msg="cleaning up dead shim" Sep 9 00:27:07.600221 env[1214]: time="2025-09-09T00:27:07.600181016Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:27:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n" Sep 9 00:27:07.684693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1fee8470e87d30879abb0745b15084f6f4ea8a783df44ce177148315452d04c-rootfs.mount: Deactivated successfully. 
Sep 9 00:27:08.493766 kubelet[1917]: E0909 00:27:08.493734 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:08.502540 env[1214]: time="2025-09-09T00:27:08.502491780Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:27:08.516596 env[1214]: time="2025-09-09T00:27:08.516546435Z" level=info msg="CreateContainer within sandbox \"e273be49621242c4cf0485fc8dd222b8bf152529a4893ebb4834d855a954650a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76\"" Sep 9 00:27:08.517328 env[1214]: time="2025-09-09T00:27:08.517292362Z" level=info msg="StartContainer for \"d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76\"" Sep 9 00:27:08.533541 systemd[1]: Started cri-containerd-d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76.scope. Sep 9 00:27:08.590398 env[1214]: time="2025-09-09T00:27:08.590350941Z" level=info msg="StartContainer for \"d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76\" returns successfully" Sep 9 00:27:08.684676 systemd[1]: run-containerd-runc-k8s.io-d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76-runc.quBbyE.mount: Deactivated successfully. 
Sep 9 00:27:08.815104 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 9 00:27:09.498194 kubelet[1917]: E0909 00:27:09.498132 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:10.846650 kubelet[1917]: E0909 00:27:10.846616 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:10.943498 systemd[1]: run-containerd-runc-k8s.io-d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76-runc.zSG3kD.mount: Deactivated successfully. Sep 9 00:27:11.608845 systemd-networkd[1051]: lxc_health: Link UP Sep 9 00:27:11.616045 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:27:11.616187 systemd-networkd[1051]: lxc_health: Gained carrier Sep 9 00:27:12.847371 kubelet[1917]: E0909 00:27:12.847325 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:12.866867 kubelet[1917]: I0909 00:27:12.866787 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2tfzm" podStartSLOduration=8.866769636 podStartE2EDuration="8.866769636s" podCreationTimestamp="2025-09-09 00:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:27:09.52984341 +0000 UTC m=+97.415182054" watchObservedRunningTime="2025-09-09 00:27:12.866769636 +0000 UTC m=+100.752108320" Sep 9 00:27:13.052319 systemd[1]: run-containerd-runc-k8s.io-d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76-runc.8n3YS6.mount: Deactivated successfully. 
Sep 9 00:27:13.228577 kubelet[1917]: E0909 00:27:13.228535 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:13.257152 systemd-networkd[1051]: lxc_health: Gained IPv6LL Sep 9 00:27:13.509355 kubelet[1917]: E0909 00:27:13.509239 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:14.511069 kubelet[1917]: E0909 00:27:14.511031 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:17.368898 systemd[1]: run-containerd-runc-k8s.io-d3b2c25b141724442599f2de9cb44ff7ac073aa0d9ddb19fe4e588bf7a25bc76-runc.OFVn5y.mount: Deactivated successfully. Sep 9 00:27:17.428436 sshd[3735]: pam_unix(sshd:session): session closed for user core Sep 9 00:27:17.430718 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:50484.service: Deactivated successfully. Sep 9 00:27:17.431421 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:27:17.432072 systemd-logind[1203]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:27:17.433704 systemd-logind[1203]: Removed session 25.