Aug 13 00:14:34.746974 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:14:34.746994 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 13 00:14:34.747002 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:14:34.747008 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Aug 13 00:14:34.747012 kernel: random: crng init done
Aug 13 00:14:34.747018 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:14:34.747024 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Aug 13 00:14:34.747031 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:14:34.747036 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747042 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747047 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747052 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747058 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747064 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747071 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747077 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747083 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:14:34.747088 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:14:34.747094 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:14:34.747100 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:14:34.747105 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Aug 13 00:14:34.747111 kernel: Zone ranges:
Aug 13 00:14:34.747117 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:14:34.747123 kernel: DMA32 empty
Aug 13 00:14:34.747129 kernel: Normal empty
Aug 13 00:14:34.747135 kernel: Movable zone start for each node
Aug 13 00:14:34.747141 kernel: Early memory node ranges
Aug 13 00:14:34.747146 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Aug 13 00:14:34.747152 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Aug 13 00:14:34.747158 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Aug 13 00:14:34.747163 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Aug 13 00:14:34.747169 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Aug 13 00:14:34.747174 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Aug 13 00:14:34.747180 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Aug 13 00:14:34.747186 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:14:34.747192 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:14:34.747198 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:14:34.747204 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:14:34.747209 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:14:34.747215 kernel: psci: Trusted OS migration not required
Aug 13 00:14:34.747223 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:14:34.747229 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:14:34.747236 kernel: ACPI: SRAT not present
Aug 13 00:14:34.747243 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 13 00:14:34.747249 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 13 00:14:34.747255 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:14:34.747261 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:14:34.747267 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:14:34.747273 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:14:34.747279 kernel: CPU features: detected: Spectre-v4
Aug 13 00:14:34.747284 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:14:34.747292 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:14:34.747298 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:14:34.747304 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:14:34.747309 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:14:34.747316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:14:34.747321 kernel: Policy zone: DMA
Aug 13 00:14:34.747328 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:14:34.747335 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:14:34.747341 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:14:34.747347 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:14:34.747354 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:14:34.747361 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Aug 13 00:14:34.747367 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:14:34.747373 kernel: trace event string verifier disabled
Aug 13 00:14:34.747379 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:14:34.747385 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:14:34.747391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:14:34.747397 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:14:34.747403 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:14:34.747416 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:14:34.747424 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:14:34.747430 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:14:34.747437 kernel: GICv3: 256 SPIs implemented
Aug 13 00:14:34.747443 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:14:34.747449 kernel: GICv3: Distributor has no Range Selector support
Aug 13 00:14:34.747455 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:14:34.747461 kernel: GICv3: 16 PPIs implemented
Aug 13 00:14:34.747467 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:14:34.747473 kernel: ACPI: SRAT not present
Aug 13 00:14:34.747479 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:14:34.747485 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:14:34.747492 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:14:34.747499 kernel: GICv3: using LPI property table @0x00000000400d0000
Aug 13 00:14:34.747505 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Aug 13 00:14:34.747512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:34.747518 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:14:34.747525 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:14:34.747531 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:14:34.747537 kernel: arm-pv: using stolen time PV
Aug 13 00:14:34.747543 kernel: Console: colour dummy device 80x25
Aug 13 00:14:34.747549 kernel: ACPI: Core revision 20210730
Aug 13 00:14:34.747556 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:14:34.747562 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:14:34.747568 kernel: LSM: Security Framework initializing
Aug 13 00:14:34.747575 kernel: SELinux: Initializing.
Aug 13 00:14:34.747582 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:14:34.747588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:14:34.747595 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:14:34.747601 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:14:34.747607 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:14:34.747613 kernel: Remapping and enabling EFI services.
Aug 13 00:14:34.747619 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:14:34.747625 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:14:34.747633 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:14:34.747639 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Aug 13 00:14:34.747645 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:34.747652 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:14:34.747658 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:14:34.747665 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:14:34.747671 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Aug 13 00:14:34.747678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:34.747684 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:14:34.747690 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:14:34.747698 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:14:34.747704 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Aug 13 00:14:34.747710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:14:34.747717 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:14:34.747727 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:14:34.747735 kernel: SMP: Total of 4 processors activated.
Aug 13 00:14:34.747741 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:14:34.747793 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:14:34.747800 kernel: CPU features: detected: Common not Private translations
Aug 13 00:14:34.747807 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:14:34.747813 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:14:34.747820 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:14:34.747828 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:14:34.747835 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:14:34.747841 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:14:34.747848 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:14:34.747855 kernel: alternatives: patching kernel code
Aug 13 00:14:34.747863 kernel: devtmpfs: initialized
Aug 13 00:14:34.747869 kernel: KASLR enabled
Aug 13 00:14:34.747876 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:14:34.747882 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:14:34.747889 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:14:34.747895 kernel: SMBIOS 3.0.0 present.
Aug 13 00:14:34.747902 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Aug 13 00:14:34.747908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:14:34.747914 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:14:34.747922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:14:34.747929 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:14:34.747936 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:14:34.747942 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Aug 13 00:14:34.747949 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:14:34.747955 kernel: cpuidle: using governor menu
Aug 13 00:14:34.747962 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:14:34.747968 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:14:34.747975 kernel: ACPI: bus type PCI registered
Aug 13 00:14:34.747983 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:14:34.747989 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:14:34.747996 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:14:34.748002 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:14:34.748009 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:14:34.748015 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:14:34.748022 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:14:34.748029 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:14:34.748035 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:14:34.748043 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:14:34.748049 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:14:34.748056 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:14:34.748062 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:14:34.748069 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:14:34.748075 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:14:34.748082 kernel: ACPI: Interpreter enabled
Aug 13 00:14:34.748088 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:14:34.748094 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:14:34.748103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:14:34.748109 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:14:34.748116 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:14:34.748244 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:14:34.748308 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:14:34.748366 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:14:34.748432 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:14:34.748493 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:14:34.748502 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:14:34.748508 kernel: PCI host bridge to bus 0000:00
Aug 13 00:14:34.748573 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:14:34.748625 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:14:34.748676 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:14:34.748727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:14:34.748814 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:14:34.748886 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:14:34.748946 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:14:34.749004 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:14:34.749062 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:14:34.749119 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:14:34.749177 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:14:34.749237 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:14:34.749292 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:14:34.749343 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:14:34.749393 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:14:34.749402 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:14:34.749415 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:14:34.749424 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:14:34.749433 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:14:34.749439 kernel: iommu: Default domain type: Translated
Aug 13 00:14:34.749446 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:14:34.749452 kernel: vgaarb: loaded
Aug 13 00:14:34.749459 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:14:34.749466 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:14:34.749472 kernel: PTP clock support registered
Aug 13 00:14:34.749479 kernel: Registered efivars operations
Aug 13 00:14:34.749485 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:14:34.749492 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:14:34.749500 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:14:34.749506 kernel: pnp: PnP ACPI init
Aug 13 00:14:34.749585 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:14:34.749595 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:14:34.749601 kernel: NET: Registered PF_INET protocol family
Aug 13 00:14:34.749609 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:14:34.749615 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:14:34.749622 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:14:34.749631 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:14:34.749637 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:14:34.749644 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:14:34.749651 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:14:34.749657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:14:34.749664 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:14:34.749670 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:14:34.749677 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:14:34.749684 kernel: kvm [1]: HYP mode not available
Aug 13 00:14:34.749691 kernel: Initialise system trusted keyrings
Aug 13 00:14:34.749698 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:14:34.749704 kernel: Key type asymmetric registered
Aug 13 00:14:34.749710 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:14:34.749717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:14:34.749724 kernel: io scheduler mq-deadline registered
Aug 13 00:14:34.749730 kernel: io scheduler kyber registered
Aug 13 00:14:34.749737 kernel: io scheduler bfq registered
Aug 13 00:14:34.749766 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:14:34.749775 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:14:34.749783 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:14:34.749853 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:14:34.749862 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:14:34.749869 kernel: thunder_xcv, ver 1.0
Aug 13 00:14:34.749875 kernel: thunder_bgx, ver 1.0
Aug 13 00:14:34.749881 kernel: nicpf, ver 1.0
Aug 13 00:14:34.749888 kernel: nicvf, ver 1.0
Aug 13 00:14:34.749955 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:14:34.750013 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:14:34 UTC (1755044074)
Aug 13 00:14:34.750022 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:14:34.750028 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:14:34.750035 kernel: Segment Routing with IPv6
Aug 13 00:14:34.750041 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:14:34.750048 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:14:34.750054 kernel: Key type dns_resolver registered
Aug 13 00:14:34.750061 kernel: registered taskstats version 1
Aug 13 00:14:34.750069 kernel: Loading compiled-in X.509 certificates
Aug 13 00:14:34.750076 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 13 00:14:34.750082 kernel: Key type .fscrypt registered
Aug 13 00:14:34.750089 kernel: Key type fscrypt-provisioning registered
Aug 13 00:14:34.750096 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:14:34.750102 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:14:34.750108 kernel: ima: No architecture policies found
Aug 13 00:14:34.750115 kernel: clk: Disabling unused clocks
Aug 13 00:14:34.750121 kernel: Freeing unused kernel memory: 36416K
Aug 13 00:14:34.750130 kernel: Run /init as init process
Aug 13 00:14:34.750136 kernel: with arguments:
Aug 13 00:14:34.750143 kernel: /init
Aug 13 00:14:34.750149 kernel: with environment:
Aug 13 00:14:34.750155 kernel: HOME=/
Aug 13 00:14:34.750162 kernel: TERM=linux
Aug 13 00:14:34.750168 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:14:34.750176 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:14:34.750186 systemd[1]: Detected virtualization kvm.
Aug 13 00:14:34.750194 systemd[1]: Detected architecture arm64.
Aug 13 00:14:34.750201 systemd[1]: Running in initrd.
Aug 13 00:14:34.750208 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:14:34.750214 systemd[1]: Hostname set to .
Aug 13 00:14:34.750222 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:14:34.750229 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:14:34.750236 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:14:34.750244 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:14:34.750251 systemd[1]: Reached target paths.target.
Aug 13 00:14:34.750258 systemd[1]: Reached target slices.target.
Aug 13 00:14:34.750265 systemd[1]: Reached target swap.target.
Aug 13 00:14:34.750272 systemd[1]: Reached target timers.target.
Aug 13 00:14:34.750280 systemd[1]: Listening on iscsid.socket.
Aug 13 00:14:34.750287 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:14:34.750296 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:14:34.750303 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:14:34.750310 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:14:34.750317 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:14:34.750324 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:14:34.750332 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:14:34.750339 systemd[1]: Reached target sockets.target.
Aug 13 00:14:34.750346 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:14:34.750353 systemd[1]: Finished network-cleanup.service.
Aug 13 00:14:34.750361 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:14:34.750368 systemd[1]: Starting systemd-journald.service...
Aug 13 00:14:34.750374 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:14:34.750381 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:14:34.750388 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:14:34.750395 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:14:34.750403 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:14:34.750418 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:14:34.750426 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:14:34.750434 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:14:34.750445 systemd-journald[290]: Journal started
Aug 13 00:14:34.750488 systemd-journald[290]: Runtime Journal (/run/log/journal/c6fcfdbdba3541558b80364da7e6d337) is 6.0M, max 48.7M, 42.6M free.
Aug 13 00:14:34.742420 systemd-modules-load[291]: Inserted module 'overlay'
Aug 13 00:14:34.761547 systemd[1]: Started systemd-journald.service.
Aug 13 00:14:34.761590 kernel: audit: type=1130 audit(1755044074.754:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.761962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:14:34.767662 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:14:34.767682 kernel: audit: type=1130 audit(1755044074.764:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.769495 kernel: Bridge firewalling registered Aug 13 00:14:34.768478 systemd-modules-load[291]: Inserted module 'br_netfilter' Aug 13 00:14:34.768653 systemd-resolved[292]: Positive Trust Anchors: Aug 13 00:14:34.768660 systemd-resolved[292]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:14:34.768686 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:14:34.772965 systemd-resolved[292]: Defaulting to hostname 'linux'. Aug 13 00:14:34.780428 kernel: SCSI subsystem initialized Aug 13 00:14:34.773809 systemd[1]: Started systemd-resolved.service. Aug 13 00:14:34.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.783018 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:14:34.787197 kernel: audit: type=1130 audit(1755044074.779:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.787229 kernel: audit: type=1130 audit(1755044074.784:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.787225 systemd[1]: Reached target nss-lookup.target. Aug 13 00:14:34.790588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:14:34.790614 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:14:34.790623 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:14:34.790230 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:14:34.794492 systemd-modules-load[291]: Inserted module 'dm_multipath' Aug 13 00:14:34.795255 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:14:34.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.798547 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:14:34.801439 kernel: audit: type=1130 audit(1755044074.796:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.801553 dracut-cmdline[309]: dracut-dracut-053 Aug 13 00:14:34.803609 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc Aug 13 00:14:34.809314 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:14:34.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.813787 kernel: audit: type=1130 audit(1755044074.810:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:34.864769 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:14:34.876771 kernel: iscsi: registered transport (tcp) Aug 13 00:14:34.891771 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:14:34.891784 kernel: QLogic iSCSI HBA Driver Aug 13 00:14:34.925183 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:14:34.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.926902 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:14:34.930254 kernel: audit: type=1130 audit(1755044074.925:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:34.971787 kernel: raid6: neonx8 gen() 12247 MB/s Aug 13 00:14:34.988758 kernel: raid6: neonx8 xor() 10762 MB/s Aug 13 00:14:35.005762 kernel: raid6: neonx4 gen() 13535 MB/s Aug 13 00:14:35.022763 kernel: raid6: neonx4 xor() 11145 MB/s Aug 13 00:14:35.039762 kernel: raid6: neonx2 gen() 12943 MB/s Aug 13 00:14:35.056759 kernel: raid6: neonx2 xor() 10322 MB/s Aug 13 00:14:35.073766 kernel: raid6: neonx1 gen() 10580 MB/s Aug 13 00:14:35.090764 kernel: raid6: neonx1 xor() 8768 MB/s Aug 13 00:14:35.107770 kernel: raid6: int64x8 gen() 6262 MB/s Aug 13 00:14:35.124792 kernel: raid6: int64x8 xor() 3535 MB/s Aug 13 00:14:35.141787 kernel: raid6: int64x4 gen() 7212 MB/s Aug 13 00:14:35.158778 kernel: raid6: int64x4 xor() 3853 MB/s Aug 13 00:14:35.175772 kernel: raid6: int64x2 gen() 6142 MB/s Aug 13 00:14:35.192776 kernel: raid6: int64x2 xor() 3317 MB/s Aug 13 00:14:35.209770 kernel: raid6: int64x1 gen() 5037 MB/s Aug 13 00:14:35.226838 kernel: raid6: int64x1 xor() 2642 MB/s Aug 13 00:14:35.226849 kernel: raid6: using algorithm neonx4 gen() 13535 MB/s Aug 13 00:14:35.226858 kernel: raid6: .... 
xor() 11145 MB/s, rmw enabled Aug 13 00:14:35.227920 kernel: raid6: using neon recovery algorithm Aug 13 00:14:35.239165 kernel: xor: measuring software checksum speed Aug 13 00:14:35.239185 kernel: 8regs : 17199 MB/sec Aug 13 00:14:35.239194 kernel: 32regs : 20707 MB/sec Aug 13 00:14:35.239790 kernel: arm64_neon : 27570 MB/sec Aug 13 00:14:35.239802 kernel: xor: using function: arm64_neon (27570 MB/sec) Aug 13 00:14:35.296785 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Aug 13 00:14:35.306861 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:14:35.311416 kernel: audit: type=1130 audit(1755044075.306:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:35.311446 kernel: audit: type=1334 audit(1755044075.309:10): prog-id=7 op=LOAD Aug 13 00:14:35.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:35.309000 audit: BPF prog-id=7 op=LOAD Aug 13 00:14:35.310000 audit: BPF prog-id=8 op=LOAD Aug 13 00:14:35.311789 systemd[1]: Starting systemd-udevd.service... Aug 13 00:14:35.324192 systemd-udevd[492]: Using default interface naming scheme 'v252'. Aug 13 00:14:35.327664 systemd[1]: Started systemd-udevd.service. Aug 13 00:14:35.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:35.329305 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:14:35.340527 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Aug 13 00:14:35.368729 systemd[1]: Finished dracut-pre-trigger.service. 
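[Editor's note: the raid6 and xor lines above show the kernel timing each candidate implementation (neonx4, int64x8, arm64_neon, ...) and keeping the fastest. A minimal sketch of that benchmark-and-pick-fastest pattern, in Python rather than kernel C — the candidate functions here are made-up stand-ins, not the kernel's real gen/xor routines:]

```python
import time

# Hypothetical stand-in candidates; the kernel's real candidates are the
# neonx*/int64x* gen() and xor() routines benchmarked in the log above.
def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def pick_fastest(candidates, data, rounds=3):
    """Time each candidate over `rounds` runs and return the fastest name."""
    best_name, best_elapsed = None, float("inf")
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(rounds):
            fn(data)
        elapsed = time.perf_counter() - start
        if elapsed < best_elapsed:
            best_name, best_elapsed = name, elapsed
    return best_name

data = list(range(10_000))
print(pick_fastest({"loop": sum_loop, "builtin": sum_builtin}, data))
```

[The kernel does the same selection once at boot, which is why the log prints every candidate's MB/s before the final "using algorithm neonx4" and "using function: arm64_neon" lines.]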
Aug 13 00:14:35.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:35.370424 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:14:35.403819 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:14:35.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:35.438550 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 00:14:35.445684 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:14:35.445698 kernel: GPT:9289727 != 19775487 Aug 13 00:14:35.445713 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:14:35.445723 kernel: GPT:9289727 != 19775487 Aug 13 00:14:35.445730 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:14:35.445754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:14:35.458581 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:14:35.462200 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:14:35.464037 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (549) Aug 13 00:14:35.466980 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:14:35.468063 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:14:35.474726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:14:35.476440 systemd[1]: Starting disk-uuid.service... Aug 13 00:14:35.482438 disk-uuid[563]: Primary Header is updated. Aug 13 00:14:35.482438 disk-uuid[563]: Secondary Entries is updated. Aug 13 00:14:35.482438 disk-uuid[563]: Secondary Header is updated. 
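[Editor's note: the "GPT:9289727 != 19775487" complaint above means the backup (alternate) GPT header claims to live at LBA 9289727, but on this 19775488-sector disk it must occupy the last LBA. A toy consistency check of that rule — not the kernel's implementation:]

```python
def alt_header_ok(alt_lba: int, total_sectors: int) -> bool:
    """The backup GPT header must sit on the disk's final logical block."""
    return alt_lba == total_sectors - 1

# Numbers straight from the log: a 19775488-sector vda whose backup header
# was left behind at LBA 9289727 (typical after an image is grown).
print(alt_header_ok(9289727, 19775488))   # False: the mismatch the kernel logs
print(alt_header_ok(19775487, 19775488))  # True: where the header should be
```

[On a live system, GNU Parted (as the kernel message suggests) or `sgdisk -e` will relocate the backup GPT structures to the end of the enlarged disk.]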
Aug 13 00:14:35.485827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:14:36.499766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:14:36.499989 disk-uuid[564]: The operation has completed successfully. Aug 13 00:14:36.532328 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:14:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.532438 systemd[1]: Finished disk-uuid.service. Aug 13 00:14:36.534200 systemd[1]: Starting verity-setup.service... Aug 13 00:14:36.555806 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:14:36.587571 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:14:36.590794 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:14:36.592626 systemd[1]: Finished verity-setup.service. Aug 13 00:14:36.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.644656 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:14:36.646132 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:14:36.645530 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:14:36.646244 systemd[1]: Starting ignition-setup.service... Aug 13 00:14:36.648853 systemd[1]: Starting parse-ip-for-networkd.service... 
Aug 13 00:14:36.654795 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:14:36.654832 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:14:36.654842 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:14:36.666505 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:14:36.673674 systemd[1]: Finished ignition-setup.service. Aug 13 00:14:36.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.675436 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:14:36.735787 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:14:36.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.736000 audit: BPF prog-id=9 op=LOAD Aug 13 00:14:36.737851 systemd[1]: Starting systemd-networkd.service... Aug 13 00:14:36.766487 systemd-networkd[739]: lo: Link UP Aug 13 00:14:36.766497 systemd-networkd[739]: lo: Gained carrier Aug 13 00:14:36.767152 systemd-networkd[739]: Enumeration completed Aug 13 00:14:36.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.767244 systemd[1]: Started systemd-networkd.service. Aug 13 00:14:36.768722 systemd[1]: Reached target network.target. Aug 13 00:14:36.772474 ignition[652]: Ignition 2.14.0 Aug 13 00:14:36.770681 systemd[1]: Starting iscsiuio.service... Aug 13 00:14:36.772481 ignition[652]: Stage: fetch-offline Aug 13 00:14:36.771292 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:14:36.772526 ignition[652]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:36.772673 systemd-networkd[739]: eth0: Link UP Aug 13 00:14:36.772536 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:14:36.772677 systemd-networkd[739]: eth0: Gained carrier Aug 13 00:14:36.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.772717 ignition[652]: parsed url from cmdline: "" Aug 13 00:14:36.781093 systemd[1]: Started iscsiuio.service. Aug 13 00:14:36.772720 ignition[652]: no config URL provided Aug 13 00:14:36.785742 systemd[1]: Starting iscsid.service... Aug 13 00:14:36.772725 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:14:36.789986 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:14:36.789986 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:14:36.789986 iscsid[746]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:14:36.789986 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:14:36.789986 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. 
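[Editor's note: the iscsid warning above is silenced by creating an initiatorname file containing one syntactically valid IQN. A sketch, written to a temp directory here; on a real system the path is /etc/iscsi/initiatorname.iscsi, and the IQN below is a made-up example, not one to reuse:]

```shell
tmpdir=$(mktemp -d)
# Same shape iscsid asks for: InitiatorName=iqn.yyyy-mm.<reversed domain>[:id]
cat > "$tmpdir/initiatorname.iscsi" <<'EOF'
InitiatorName=iqn.2025-08.io.example:node1
EOF
# Sanity-check the format the daemon expects before installing the file.
grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.' "$tmpdir/initiatorname.iscsi" \
  && echo "format ok"
rm -r "$tmpdir"
```

[As the log itself notes, the warning is harmless when no software-iSCSI targets are in use, which is why this diskful QEMU boot proceeds regardless.]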
Aug 13 00:14:36.789986 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:14:36.789986 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:14:36.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.772732 ignition[652]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:14:36.792069 systemd[1]: Started iscsid.service. Aug 13 00:14:36.772769 ignition[652]: op(1): [started] loading QEMU firmware config module Aug 13 00:14:36.796717 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:14:36.772774 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 00:14:36.798827 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:14:36.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.782336 ignition[652]: op(1): [finished] loading QEMU firmware config module Aug 13 00:14:36.807315 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:14:36.782359 ignition[652]: QEMU firmware config was not found. Ignoring... Aug 13 00:14:36.809047 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:14:36.811203 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:14:36.813093 systemd[1]: Reached target remote-fs.target. Aug 13 00:14:36.818538 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:14:36.827184 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:14:36.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:36.846612 ignition[652]: parsing config with SHA512: 0cd31c5df742ece1b7487adc81b2424d4416fd6c0ea87fd65382dd467ac2d256190e2fe5d1552f4657c452d5cff3ed1c25af658d93f8005d8a4d6a1a5f034e87 Aug 13 00:14:36.857097 unknown[652]: fetched base config from "system" Aug 13 00:14:36.857600 ignition[652]: fetch-offline: fetch-offline passed Aug 13 00:14:36.857110 unknown[652]: fetched user config from "qemu" Aug 13 00:14:36.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.857649 ignition[652]: Ignition finished successfully Aug 13 00:14:36.858774 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:14:36.860302 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:14:36.861116 systemd[1]: Starting ignition-kargs.service... Aug 13 00:14:36.870835 ignition[761]: Ignition 2.14.0 Aug 13 00:14:36.870852 ignition[761]: Stage: kargs Aug 13 00:14:36.870956 ignition[761]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:36.873117 systemd[1]: Finished ignition-kargs.service. Aug 13 00:14:36.870966 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:14:36.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.875450 systemd[1]: Starting ignition-disks.service... 
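[Editor's note: the "parsing config with SHA512: 0cd31c..." line above is Ignition logging the digest of the config it is about to apply, so a given boot can be tied to an exact config. Reproducing the idea on a toy payload — the JSON below is a made-up stand-in, not the config behind that digest:]

```python
import hashlib

config = b'{"ignition": {"version": "3.0.0"}}'  # hypothetical config payload
digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, same length as the digest in the log
```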
Aug 13 00:14:36.871869 ignition[761]: kargs: kargs passed Aug 13 00:14:36.871912 ignition[761]: Ignition finished successfully Aug 13 00:14:36.882597 ignition[767]: Ignition 2.14.0 Aug 13 00:14:36.882607 ignition[767]: Stage: disks Aug 13 00:14:36.882710 ignition[767]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:36.882719 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:14:36.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.884531 systemd[1]: Finished ignition-disks.service. Aug 13 00:14:36.883689 ignition[767]: disks: disks passed Aug 13 00:14:36.885978 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:14:36.883737 ignition[767]: Ignition finished successfully Aug 13 00:14:36.887612 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:14:36.889047 systemd[1]: Reached target local-fs.target. Aug 13 00:14:36.890383 systemd[1]: Reached target sysinit.target. Aug 13 00:14:36.891829 systemd[1]: Reached target basic.target. Aug 13 00:14:36.894231 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:14:36.907106 systemd-fsck[775]: ROOT: clean, 629/553520 files, 56026/553472 blocks Aug 13 00:14:36.910866 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:14:36.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.913010 systemd[1]: Mounting sysroot.mount... Aug 13 00:14:36.921795 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:14:36.921937 systemd[1]: Mounted sysroot.mount. Aug 13 00:14:36.922770 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:14:36.925806 systemd[1]: Mounting sysroot-usr.mount... 
Aug 13 00:14:36.926926 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:14:36.927025 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:14:36.927048 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:14:36.932282 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:14:36.934147 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:14:36.940882 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:14:36.946166 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:14:36.951357 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:14:36.955284 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:14:36.985513 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:14:36.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:36.987315 systemd[1]: Starting ignition-mount.service... Aug 13 00:14:36.989291 systemd[1]: Starting sysroot-boot.service... Aug 13 00:14:36.996116 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:14:37.007167 ignition[828]: INFO : Ignition 2.14.0 Aug 13 00:14:37.007167 ignition[828]: INFO : Stage: mount Aug 13 00:14:37.009052 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:37.009052 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:14:37.011709 ignition[828]: INFO : mount: mount passed Aug 13 00:14:37.011709 ignition[828]: INFO : Ignition finished successfully Aug 13 00:14:37.012035 systemd[1]: Finished ignition-mount.service. 
Aug 13 00:14:37.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:37.020278 systemd[1]: Finished sysroot-boot.service. Aug 13 00:14:37.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:37.605180 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:14:37.612032 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836) Aug 13 00:14:37.612075 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:14:37.612086 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:14:37.612798 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:14:37.616359 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:14:37.618113 systemd[1]: Starting ignition-files.service... 
Aug 13 00:14:37.631951 ignition[856]: INFO : Ignition 2.14.0 Aug 13 00:14:37.631951 ignition[856]: INFO : Stage: files Aug 13 00:14:37.633772 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:37.633772 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:14:37.633772 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:14:37.640109 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:14:37.640109 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:14:37.643854 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:14:37.645285 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:14:37.646906 unknown[856]: wrote ssh authorized keys file for user: core Aug 13 00:14:37.648117 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:14:37.648117 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:14:37.648117 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:14:37.648117 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:14:37.648117 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:14:37.711325 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:14:38.100006 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 
00:14:38.104126 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:14:38.104126 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 00:14:38.365847 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Aug 13 00:14:38.424124 systemd-networkd[739]: eth0: Gained IPv6LL Aug 13 00:14:38.473245 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:14:38.475089 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:14:38.489686 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:14:38.772226 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Aug 13 00:14:39.132635 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:14:39.132635 ignition[856]: INFO : files: op(d): [started] processing unit "containerd.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(d): [finished] processing unit "containerd.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(f): [started] 
processing unit "prepare-helm.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 00:14:39.136921 ignition[856]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:14:39.184799 ignition[856]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:14:39.186391 ignition[856]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 00:14:39.186391 ignition[856]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:14:39.186391 ignition[856]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:14:39.186391 ignition[856]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 
00:14:39.186391 ignition[856]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:14:39.186391 ignition[856]: INFO : files: files passed Aug 13 00:14:39.186391 ignition[856]: INFO : Ignition finished successfully Aug 13 00:14:39.203032 kernel: kauditd_printk_skb: 22 callbacks suppressed Aug 13 00:14:39.203055 kernel: audit: type=1130 audit(1755044079.189:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.188477 systemd[1]: Finished ignition-files.service. Aug 13 00:14:39.209541 kernel: audit: type=1130 audit(1755044079.203:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.209565 kernel: audit: type=1131 audit(1755044079.203:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.192081 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Aug 13 00:14:39.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.214183 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:14:39.216164 kernel: audit: type=1130 audit(1755044079.209:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.196931 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:14:39.218633 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:14:39.197836 systemd[1]: Starting ignition-quench.service... Aug 13 00:14:39.201692 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:14:39.201797 systemd[1]: Finished ignition-quench.service. Aug 13 00:14:39.206613 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:14:39.210630 systemd[1]: Reached target ignition-complete.target. Aug 13 00:14:39.215673 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:14:39.234264 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:14:39.234362 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:14:39.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:39.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Aug 13 00:14:39.236330 systemd[1]: Reached target initrd-fs.target.
Aug 13 00:14:39.243508 kernel: audit: type=1130 audit(1755044079.236:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.243535 kernel: audit: type=1131 audit(1755044079.236:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.242815 systemd[1]: Reached target initrd.target.
Aug 13 00:14:39.244340 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 00:14:39.245236 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 00:14:39.259622 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 00:14:39.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.263784 kernel: audit: type=1130 audit(1755044079.260:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.261417 systemd[1]: Starting initrd-cleanup.service...
Aug 13 00:14:39.272691 systemd[1]: Stopped target nss-lookup.target.
Aug 13 00:14:39.273695 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 00:14:39.275257 systemd[1]: Stopped target timers.target.
Aug 13 00:14:39.276718 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:14:39.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.276853 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 00:14:39.282796 kernel: audit: type=1131 audit(1755044079.277:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.278211 systemd[1]: Stopped target initrd.target.
Aug 13 00:14:39.282180 systemd[1]: Stopped target basic.target.
Aug 13 00:14:39.283579 systemd[1]: Stopped target ignition-complete.target.
Aug 13 00:14:39.285071 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 00:14:39.286573 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 00:14:39.288168 systemd[1]: Stopped target remote-fs.target.
Aug 13 00:14:39.289627 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 00:14:39.291283 systemd[1]: Stopped target sysinit.target.
Aug 13 00:14:39.292678 systemd[1]: Stopped target local-fs.target.
Aug 13 00:14:39.294166 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 00:14:39.295558 systemd[1]: Stopped target swap.target.
Aug 13 00:14:39.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.296877 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:14:39.303245 kernel: audit: type=1131 audit(1755044079.298:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.297001 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 00:14:39.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.298480 systemd[1]: Stopped target cryptsetup.target.
Aug 13 00:14:39.308800 kernel: audit: type=1131 audit(1755044079.303:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.302428 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:14:39.302544 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 00:14:39.304193 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:14:39.304288 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 00:14:39.308263 systemd[1]: Stopped target paths.target.
Aug 13 00:14:39.309597 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:14:39.313780 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 00:14:39.314799 systemd[1]: Stopped target slices.target.
Aug 13 00:14:39.316393 systemd[1]: Stopped target sockets.target.
Aug 13 00:14:39.317878 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:14:39.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.317949 systemd[1]: Closed iscsid.socket.
Aug 13 00:14:39.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.319139 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:14:39.319239 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 00:14:39.320616 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:14:39.320700 systemd[1]: Stopped ignition-files.service.
Aug 13 00:14:39.322963 systemd[1]: Stopping ignition-mount.service...
Aug 13 00:14:39.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.333784 ignition[896]: INFO : Ignition 2.14.0
Aug 13 00:14:39.333784 ignition[896]: INFO : Stage: umount
Aug 13 00:14:39.333784 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:14:39.333784 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:14:39.333784 ignition[896]: INFO : umount: umount passed
Aug 13 00:14:39.333784 ignition[896]: INFO : Ignition finished successfully
Aug 13 00:14:39.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.324489 systemd[1]: Stopping iscsiuio.service...
Aug 13 00:14:39.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.327644 systemd[1]: Stopping sysroot-boot.service...
Aug 13 00:14:39.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.328951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:14:39.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.329075 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 00:14:39.330604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:14:39.330698 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 00:14:39.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.336056 systemd[1]: iscsiuio.service: Deactivated successfully.
Aug 13 00:14:39.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.336167 systemd[1]: Stopped iscsiuio.service.
Aug 13 00:14:39.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.340177 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:14:39.341315 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:14:39.341412 systemd[1]: Stopped ignition-mount.service.
Aug 13 00:14:39.382000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 00:14:39.342724 systemd[1]: Stopped target network.target.
Aug 13 00:14:39.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.345551 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:14:39.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.345587 systemd[1]: Closed iscsiuio.socket.
Aug 13 00:14:39.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.347291 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:14:39.347339 systemd[1]: Stopped ignition-disks.service.
Aug 13 00:14:39.352255 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:14:39.352308 systemd[1]: Stopped ignition-kargs.service.
Aug 13 00:14:39.355358 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:14:39.355408 systemd[1]: Stopped ignition-setup.service.
Aug 13 00:14:39.360786 systemd[1]: Stopping systemd-networkd.service...
Aug 13 00:14:39.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.362614 systemd[1]: Stopping systemd-resolved.service...
Aug 13 00:14:39.368494 systemd-networkd[739]: eth0: DHCPv6 lease lost
Aug 13 00:14:39.403000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 00:14:39.369829 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:14:39.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.369918 systemd[1]: Finished initrd-cleanup.service.
Aug 13 00:14:39.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.374617 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:14:39.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.374720 systemd[1]: Stopped systemd-resolved.service.
Aug 13 00:14:39.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.376474 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:14:39.376586 systemd[1]: Stopped systemd-networkd.service.
Aug 13 00:14:39.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.378976 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:14:39.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.379011 systemd[1]: Closed systemd-networkd.socket.
Aug 13 00:14:39.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.381373 systemd[1]: Stopping network-cleanup.service...
Aug 13 00:14:39.382583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:14:39.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:39.382648 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 00:14:39.384316 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:14:39.384360 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:14:39.387524 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:14:39.387577 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 00:14:39.389270 systemd[1]: Stopping systemd-udevd.service...
Aug 13 00:14:39.393239 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:14:39.396501 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:14:39.396665 systemd[1]: Stopped systemd-udevd.service.
Aug 13 00:14:39.434000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 00:14:39.434000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 00:14:39.434000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 00:14:39.398432 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:14:39.398536 systemd[1]: Stopped sysroot-boot.service.
Aug 13 00:14:39.400056 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:14:39.437000 audit: BPF prog-id=8 op=UNLOAD
Aug 13 00:14:39.437000 audit: BPF prog-id=7 op=UNLOAD
Aug 13 00:14:39.400149 systemd[1]: Stopped network-cleanup.service.
Aug 13 00:14:39.401526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:14:39.401563 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 00:14:39.403263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:14:39.403298 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 00:14:39.404880 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:14:39.404933 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 00:14:39.406414 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:14:39.406458 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 00:14:39.408498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:14:39.408541 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 00:14:39.410643 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:14:39.410685 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 00:14:39.414433 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 00:14:39.415334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:14:39.415408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Aug 13 00:14:39.455604 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:14:39.455641 iscsid[746]: iscsid shutting down.
Aug 13 00:14:39.417984 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:14:39.418033 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 00:14:39.419019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:14:39.419061 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 00:14:39.421890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 00:14:39.422337 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:14:39.422440 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 00:14:39.424138 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 00:14:39.426547 systemd[1]: Starting initrd-switch-root.service...
Aug 13 00:14:39.434364 systemd[1]: Switching root.
Aug 13 00:14:39.464685 systemd-journald[290]: Journal stopped
Aug 13 00:14:41.617460 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 00:14:41.617518 kernel: SELinux: Class anon_inode not defined in policy.
Aug 13 00:14:41.617531 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 00:14:41.617541 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:14:41.617551 kernel: SELinux: policy capability open_perms=1
Aug 13 00:14:41.617564 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:14:41.617575 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:14:41.617585 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:14:41.617594 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:14:41.617603 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:14:41.617613 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:14:41.617624 systemd[1]: Successfully loaded SELinux policy in 37.130ms.
Aug 13 00:14:41.617645 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.051ms.
Aug 13 00:14:41.617657 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:14:41.617669 systemd[1]: Detected virtualization kvm.
Aug 13 00:14:41.617680 systemd[1]: Detected architecture arm64.
Aug 13 00:14:41.617692 systemd[1]: Detected first boot.
Aug 13 00:14:41.617702 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:14:41.617712 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 00:14:41.617722 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:14:41.617733 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:14:41.617755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:14:41.617770 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:14:41.617781 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:14:41.617792 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Aug 13 00:14:41.617802 systemd[1]: Created slice system-addon\x2dconfig.slice.
Aug 13 00:14:41.617813 systemd[1]: Created slice system-addon\x2drun.slice.
Aug 13 00:14:41.617823 systemd[1]: Created slice system-getty.slice.
Aug 13 00:14:41.617834 systemd[1]: Created slice system-modprobe.slice.
Aug 13 00:14:41.617847 systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 13 00:14:41.617857 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Aug 13 00:14:41.617868 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 13 00:14:41.617878 systemd[1]: Created slice user.slice.
Aug 13 00:14:41.617894 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:14:41.617905 systemd[1]: Started systemd-ask-password-wall.path.
Aug 13 00:14:41.617915 systemd[1]: Set up automount boot.automount.
Aug 13 00:14:41.617925 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Aug 13 00:14:41.617936 systemd[1]: Reached target integritysetup.target.
Aug 13 00:14:41.617947 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 00:14:41.617958 systemd[1]: Reached target remote-fs.target.
Aug 13 00:14:41.617968 systemd[1]: Reached target slices.target.
Aug 13 00:14:41.617979 systemd[1]: Reached target swap.target.
Aug 13 00:14:41.617989 systemd[1]: Reached target torcx.target.
Aug 13 00:14:41.617999 systemd[1]: Reached target veritysetup.target.
Aug 13 00:14:41.618010 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 00:14:41.618020 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 00:14:41.618031 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:14:41.618042 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:14:41.618052 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:14:41.618062 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:14:41.618072 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:14:41.618082 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:14:41.618093 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 00:14:41.618103 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 00:14:41.618114 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 00:14:41.618125 systemd[1]: Mounting media.mount...
Aug 13 00:14:41.618137 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 00:14:41.618147 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 00:14:41.618158 systemd[1]: Mounting tmp.mount...
Aug 13 00:14:41.618170 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 00:14:41.618180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 00:14:41.618191 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:14:41.618202 systemd[1]: Starting modprobe@configfs.service...
Aug 13 00:14:41.618216 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 00:14:41.618227 systemd[1]: Starting modprobe@drm.service...
Aug 13 00:14:41.618238 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 00:14:41.618249 systemd[1]: Starting modprobe@fuse.service...
Aug 13 00:14:41.618259 systemd[1]: Starting modprobe@loop.service...
Aug 13 00:14:41.618270 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:14:41.618281 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 13 00:14:41.618293 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Aug 13 00:14:41.618303 systemd[1]: Starting systemd-journald.service...
Aug 13 00:14:41.618313 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:14:41.618324 systemd[1]: Starting systemd-network-generator.service...
Aug 13 00:14:41.618336 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 00:14:41.618347 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:14:41.618357 kernel: loop: module loaded
Aug 13 00:14:41.618368 systemd[1]: Mounted dev-hugepages.mount.
Aug 13 00:14:41.618380 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 00:14:41.618395 systemd[1]: Mounted media.mount.
Aug 13 00:14:41.618406 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 00:14:41.618417 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 00:14:41.618427 kernel: fuse: init (API version 7.34)
Aug 13 00:14:41.618437 systemd[1]: Mounted tmp.mount.
Aug 13 00:14:41.618447 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:14:41.618459 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:14:41.618470 systemd[1]: Finished modprobe@configfs.service.
Aug 13 00:14:41.618481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:14:41.618492 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:14:41.618505 systemd-journald[1023]: Journal started
Aug 13 00:14:41.618548 systemd-journald[1023]: Runtime Journal (/run/log/journal/c6fcfdbdba3541558b80364da7e6d337) is 6.0M, max 48.7M, 42.6M free.
Aug 13 00:14:41.500000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 00:14:41.500000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Aug 13 00:14:41.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.616000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 00:14:41.616000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffde844630 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:14:41.616000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 00:14:41.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.621702 systemd[1]: Started systemd-journald.service.
Aug 13 00:14:41.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.622349 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:14:41.622616 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:14:41.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.623793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:14:41.624014 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:14:41.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.625246 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:14:41.625450 systemd[1]: Finished modprobe@fuse.service.
Aug 13 00:14:41.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.626635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:14:41.626992 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:14:41.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.628262 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:14:41.629648 systemd[1]: Finished systemd-network-generator.service.
Aug 13 00:14:41.631152 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 00:14:41.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.632657 systemd[1]: Reached target network-pre.target.
Aug 13 00:14:41.634668 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Aug 13 00:14:41.636626 systemd[1]: Mounting sys-kernel-config.mount...
Aug 13 00:14:41.637527 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:14:41.639439 systemd[1]: Starting systemd-hwdb-update.service...
Aug 13 00:14:41.641532 systemd[1]: Starting systemd-journal-flush.service...
Aug 13 00:14:41.642502 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:14:41.643785 systemd[1]: Starting systemd-random-seed.service...
Aug 13 00:14:41.644732 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:14:41.646209 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:14:41.648317 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Aug 13 00:14:41.649471 systemd[1]: Mounted sys-kernel-config.mount.
Aug 13 00:14:41.651835 systemd-journald[1023]: Time spent on flushing to /var/log/journal/c6fcfdbdba3541558b80364da7e6d337 is 12.367ms for 936 entries.
Aug 13 00:14:41.651835 systemd-journald[1023]: System Journal (/var/log/journal/c6fcfdbdba3541558b80364da7e6d337) is 8.0M, max 195.6M, 187.6M free.
Aug 13 00:14:41.704321 systemd-journald[1023]: Received client request to flush runtime journal.
Aug 13 00:14:41.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.655604 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:14:41.656886 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 00:14:41.662407 systemd[1]: Starting systemd-sysusers.service...
Aug 13 00:14:41.664933 systemd[1]: Starting systemd-udev-settle.service...
Aug 13 00:14:41.704994 udevadm[1078]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 13 00:14:41.669558 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:14:41.685963 systemd[1]: Finished systemd-random-seed.service.
Aug 13 00:14:41.686974 systemd[1]: Reached target first-boot-complete.target.
Aug 13 00:14:41.702952 systemd[1]: Finished systemd-sysusers.service.
Aug 13 00:14:41.705548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:14:41.707252 systemd[1]: Finished systemd-journal-flush.service.
Aug 13 00:14:41.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:41.733730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:14:42.068095 systemd[1]: Finished systemd-hwdb-update.service.
Aug 13 00:14:42.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.070430 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:14:42.093819 systemd-udevd[1088]: Using default interface naming scheme 'v252'.
Aug 13 00:14:42.117885 systemd[1]: Started systemd-udevd.service.
Aug 13 00:14:42.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.120723 systemd[1]: Starting systemd-networkd.service...
Aug 13 00:14:42.139043 systemd[1]: Starting systemd-userdbd.service...
Aug 13 00:14:42.158270 systemd[1]: Found device dev-ttyAMA0.device.
Aug 13 00:14:42.181036 systemd[1]: Started systemd-userdbd.service.
Aug 13 00:14:42.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.200433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:14:42.237657 systemd-networkd[1097]: lo: Link UP
Aug 13 00:14:42.237667 systemd-networkd[1097]: lo: Gained carrier
Aug 13 00:14:42.239902 systemd-networkd[1097]: Enumeration completed
Aug 13 00:14:42.240023 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:14:42.240052 systemd[1]: Started systemd-networkd.service.
Aug 13 00:14:42.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.242041 systemd-networkd[1097]: eth0: Link UP
Aug 13 00:14:42.242054 systemd-networkd[1097]: eth0: Gained carrier
Aug 13 00:14:42.245204 systemd[1]: Finished systemd-udev-settle.service.
Aug 13 00:14:42.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.247488 systemd[1]: Starting lvm2-activation-early.service...
Aug 13 00:14:42.258829 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:14:42.263909 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:14:42.287782 systemd[1]: Finished lvm2-activation-early.service.
Aug 13 00:14:42.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.288878 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:14:42.291109 systemd[1]: Starting lvm2-activation.service...
Aug 13 00:14:42.294884 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:14:42.329838 systemd[1]: Finished lvm2-activation.service.
Aug 13 00:14:42.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:14:42.330823 systemd[1]: Reached target local-fs-pre.target.
Aug 13 00:14:42.331667 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:14:42.331701 systemd[1]: Reached target local-fs.target.
Aug 13 00:14:42.332503 systemd[1]: Reached target machines.target.
Aug 13 00:14:42.334643 systemd[1]: Starting ldconfig.service... Aug 13 00:14:42.336204 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.336259 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.337560 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:14:42.339650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:14:42.342034 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:14:42.344367 systemd[1]: Starting systemd-sysext.service... Aug 13 00:14:42.345724 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl) Aug 13 00:14:42.346968 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:14:42.351855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:14:42.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.357601 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:14:42.362974 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:14:42.363401 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:14:42.375764 kernel: loop0: detected capacity change from 0 to 203944 Aug 13 00:14:42.423270 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:14:42.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.426804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:14:42.437614 systemd-fsck[1139]: fsck.fat 4.2 (2021-01-31) Aug 13 00:14:42.437614 systemd-fsck[1139]: /dev/vda1: 236 files, 117307/258078 clusters Aug 13 00:14:42.442398 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:14:42.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.446778 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:14:42.452196 (sd-sysext)[1145]: Using extensions 'kubernetes'. Aug 13 00:14:42.453060 (sd-sysext)[1145]: Merged extensions into '/usr'. Aug 13 00:14:42.469724 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.471101 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:14:42.473214 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:14:42.475238 systemd[1]: Starting modprobe@loop.service... Aug 13 00:14:42.476044 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.476189 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.476943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:42.477108 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:14:42.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.478685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:42.478841 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:14:42.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.480318 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:42.480487 systemd[1]: Finished modprobe@loop.service. Aug 13 00:14:42.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.481943 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:42.482042 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.529363 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:14:42.532618 systemd[1]: Finished ldconfig.service. 
Aug 13 00:14:42.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.597691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:14:42.599666 systemd[1]: Mounting boot.mount... Aug 13 00:14:42.601646 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:14:42.608203 systemd[1]: Mounted boot.mount. Aug 13 00:14:42.609206 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:14:42.611220 systemd[1]: Finished systemd-sysext.service. Aug 13 00:14:42.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.613493 systemd[1]: Starting ensure-sysext.service... Aug 13 00:14:42.615527 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:14:42.617199 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:14:42.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.622063 systemd[1]: Reloading. Aug 13 00:14:42.628741 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:14:42.630483 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:14:42.632009 systemd-tmpfiles[1163]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 00:14:42.658513 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-08-13T00:14:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:14:42.658551 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2025-08-13T00:14:42Z" level=info msg="torcx already run" Aug 13 00:14:42.740114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:14:42.740137 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:14:42.758997 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:14:42.810933 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:14:42.814098 systemd[1]: Starting audit-rules.service... Aug 13 00:14:42.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.816318 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:14:42.818614 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:14:42.821359 systemd[1]: Starting systemd-resolved.service... Aug 13 00:14:42.823693 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:14:42.826510 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:14:42.828214 systemd[1]: Finished clean-ca-certificates.service. 
Aug 13 00:14:42.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.831000 audit[1242]: SYSTEM_BOOT pid=1242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.831617 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:14:42.835357 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.837169 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:14:42.839323 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:14:42.841562 systemd[1]: Starting modprobe@loop.service... Aug 13 00:14:42.842526 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.842725 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.842889 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:14:42.844079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:42.844264 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:14:42.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.845925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:42.846069 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:14:42.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.847533 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:42.847699 systemd[1]: Finished modprobe@loop.service. Aug 13 00:14:42.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.848982 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:42.849140 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.853961 systemd[1]: Finished systemd-update-utmp.service. 
Aug 13 00:14:42.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.856891 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.858454 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:14:42.860651 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:14:42.863097 systemd[1]: Starting modprobe@loop.service... Aug 13 00:14:42.863897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.864036 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.864147 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:14:42.873098 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:14:42.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.874704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:42.874890 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:14:42.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.876150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:42.876302 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:14:42.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.877627 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:42.877815 systemd[1]: Finished modprobe@loop.service. Aug 13 00:14:42.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.879215 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:42.879312 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.881001 systemd[1]: Starting systemd-update-done.service... Aug 13 00:14:42.884734 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Aug 13 00:14:42.886858 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:14:42.888996 systemd[1]: Starting modprobe@drm.service... Aug 13 00:14:42.891980 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:14:42.895415 systemd[1]: Starting modprobe@loop.service... Aug 13 00:14:42.899104 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.899303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.900996 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:14:42.902106 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:14:42.903269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:14:42.903470 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:14:42.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.904918 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:14:42.905071 systemd[1]: Finished modprobe@drm.service. Aug 13 00:14:42.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.906409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:14:42.906553 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:14:42.908112 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:14:42.910847 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:14:42.911014 systemd[1]: Finished modprobe@loop.service. Aug 13 00:14:42.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.912181 systemd[1]: Finished ensure-sysext.service. Aug 13 00:14:42.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:14:42.914198 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.917890 systemd[1]: Finished systemd-update-done.service. Aug 13 00:14:42.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:14:42.933000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:14:42.933000 audit[1280]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd4cb2210 a2=420 a3=0 items=0 ppid=1230 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:14:42.933000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:14:42.934585 augenrules[1280]: No rules Aug 13 00:14:42.935719 systemd[1]: Finished audit-rules.service. Aug 13 00:14:42.944909 systemd-resolved[1235]: Positive Trust Anchors: Aug 13 00:14:42.944922 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:14:42.944951 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:14:42.947198 systemd[1]: Started systemd-timesyncd.service. 
Aug 13 00:14:42.947976 systemd-timesyncd[1236]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:14:42.948041 systemd-timesyncd[1236]: Initial clock synchronization to Wed 2025-08-13 00:14:42.702028 UTC. Aug 13 00:14:42.948529 systemd[1]: Reached target time-set.target. Aug 13 00:14:42.962449 systemd-resolved[1235]: Defaulting to hostname 'linux'. Aug 13 00:14:42.967712 systemd[1]: Started systemd-resolved.service. Aug 13 00:14:42.968684 systemd[1]: Reached target network.target. Aug 13 00:14:42.969564 systemd[1]: Reached target nss-lookup.target. Aug 13 00:14:42.970500 systemd[1]: Reached target sysinit.target. Aug 13 00:14:42.971505 systemd[1]: Started motdgen.path. Aug 13 00:14:42.972306 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:14:42.973653 systemd[1]: Started logrotate.timer. Aug 13 00:14:42.974567 systemd[1]: Started mdadm.timer. Aug 13 00:14:42.975316 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:14:42.976247 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:14:42.976282 systemd[1]: Reached target paths.target. Aug 13 00:14:42.977073 systemd[1]: Reached target timers.target. Aug 13 00:14:42.978197 systemd[1]: Listening on dbus.socket. Aug 13 00:14:42.980174 systemd[1]: Starting docker.socket... Aug 13 00:14:42.981973 systemd[1]: Listening on sshd.socket. Aug 13 00:14:42.982916 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:42.983256 systemd[1]: Listening on docker.socket. Aug 13 00:14:42.984103 systemd[1]: Reached target sockets.target. Aug 13 00:14:42.984917 systemd[1]: Reached target basic.target. 
Aug 13 00:14:42.985868 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:14:42.985922 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.985944 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:14:42.987003 systemd[1]: Starting containerd.service... Aug 13 00:14:42.988864 systemd[1]: Starting dbus.service... Aug 13 00:14:42.990696 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:14:42.992923 systemd[1]: Starting extend-filesystems.service... Aug 13 00:14:42.993876 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:14:42.995363 systemd[1]: Starting motdgen.service... Aug 13 00:14:42.997433 systemd[1]: Starting prepare-helm.service... Aug 13 00:14:43.001095 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:14:43.003247 systemd[1]: Starting sshd-keygen.service... Aug 13 00:14:43.005826 systemd[1]: Starting systemd-logind.service... Aug 13 00:14:43.009632 jq[1292]: false Aug 13 00:14:43.006702 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:14:43.006803 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:14:43.012552 systemd[1]: Starting update-engine.service... Aug 13 00:14:43.014643 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:14:43.017143 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:14:43.017383 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:14:43.018413 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 13 00:14:43.018635 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:14:43.020918 extend-filesystems[1293]: Found loop1 Aug 13 00:14:43.024773 jq[1313]: true Aug 13 00:14:43.024390 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:14:43.024622 systemd[1]: Finished motdgen.service. Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda1 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda2 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda3 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found usr Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda4 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda6 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda7 Aug 13 00:14:43.028379 extend-filesystems[1293]: Found vda9 Aug 13 00:14:43.028379 extend-filesystems[1293]: Checking size of /dev/vda9 Aug 13 00:14:43.079329 extend-filesystems[1293]: Resized partition /dev/vda9 Aug 13 00:14:43.069330 systemd[1]: Started dbus.service. Aug 13 00:14:43.092568 tar[1315]: linux-arm64/helm Aug 13 00:14:43.069042 dbus-daemon[1291]: [system] SELinux support is enabled Aug 13 00:14:43.093240 jq[1318]: true Aug 13 00:14:43.093306 extend-filesystems[1348]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:14:43.076666 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:14:43.076689 systemd[1]: Reached target system-config.target. Aug 13 00:14:43.077756 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:14:43.077774 systemd[1]: Reached target user-config.target. Aug 13 00:14:43.099167 bash[1349]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:14:43.106739 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Aug 13 00:14:43.108145 update_engine[1312]: I0813 00:14:43.107692  1312 main.cc:92] Flatcar Update Engine starting
Aug 13 00:14:43.110110 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:14:43.112121 systemd[1]: Started update-engine.service.
Aug 13 00:14:43.114793 update_engine[1312]: I0813 00:14:43.112131  1312 update_check_scheduler.cc:74] Next update check in 10m33s
Aug 13 00:14:43.114655 systemd[1]: Started locksmithd.service.
Aug 13 00:14:43.130770 systemd-logind[1303]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 13 00:14:43.131411 systemd-logind[1303]: New seat seat0.
Aug 13 00:14:43.140811 systemd[1]: Started systemd-logind.service.
Aug 13 00:14:43.151765 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:14:43.164459 extend-filesystems[1348]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:14:43.164459 extend-filesystems[1348]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:14:43.164459 extend-filesystems[1348]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:14:43.168536 extend-filesystems[1293]: Resized filesystem in /dev/vda9
Aug 13 00:14:43.169495 env[1319]: time="2025-08-13T00:14:43.164888782Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 00:14:43.170145 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:14:43.170470 systemd[1]: Finished extend-filesystems.service.
Aug 13 00:14:43.182252 env[1319]: time="2025-08-13T00:14:43.182201176Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:14:43.182549 env[1319]: time="2025-08-13T00:14:43.182524329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.184356 env[1319]: time="2025-08-13T00:14:43.184310224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:14:43.184444 env[1319]: time="2025-08-13T00:14:43.184429715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.185014 env[1319]: time="2025-08-13T00:14:43.184987393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:14:43.185207 env[1319]: time="2025-08-13T00:14:43.185108629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.185292 env[1319]: time="2025-08-13T00:14:43.185264836Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 00:14:43.185436 env[1319]: time="2025-08-13T00:14:43.185418096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.185700 env[1319]: time="2025-08-13T00:14:43.185681621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.186214 env[1319]: time="2025-08-13T00:14:43.186174745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:14:43.186726 env[1319]: time="2025-08-13T00:14:43.186692916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:14:43.186838 env[1319]: time="2025-08-13T00:14:43.186822177Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:14:43.186952 env[1319]: time="2025-08-13T00:14:43.186933449Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 00:14:43.187013 env[1319]: time="2025-08-13T00:14:43.186999126Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:14:43.190157 env[1319]: time="2025-08-13T00:14:43.190128619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:14:43.190277 env[1319]: time="2025-08-13T00:14:43.190260401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:14:43.190338 env[1319]: time="2025-08-13T00:14:43.190324062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:14:43.190430 env[1319]: time="2025-08-13T00:14:43.190412886Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.190501 env[1319]: time="2025-08-13T00:14:43.190486860Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.190560 env[1319]: time="2025-08-13T00:14:43.190545831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.190616 env[1319]: time="2025-08-13T00:14:43.190602358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.191040 env[1319]: time="2025-08-13T00:14:43.191012746Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.191121 env[1319]: time="2025-08-13T00:14:43.191106261Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.191182 env[1319]: time="2025-08-13T00:14:43.191168643Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.191270 env[1319]: time="2025-08-13T00:14:43.191248278Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.191488 env[1319]: time="2025-08-13T00:14:43.191468418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:14:43.191683 env[1319]: time="2025-08-13T00:14:43.191666342Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:14:43.192003 env[1319]: time="2025-08-13T00:14:43.191977787Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:14:43.192448 env[1319]: time="2025-08-13T00:14:43.192412446Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:14:43.192504 env[1319]: time="2025-08-13T00:14:43.192468120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192504 env[1319]: time="2025-08-13T00:14:43.192484637Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:14:43.192652 env[1319]: time="2025-08-13T00:14:43.192640495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192680 env[1319]: time="2025-08-13T00:14:43.192659221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192680 env[1319]: time="2025-08-13T00:14:43.192673295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192737 env[1319]: time="2025-08-13T00:14:43.192685353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192737 env[1319]: time="2025-08-13T00:14:43.192698573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192737 env[1319]: time="2025-08-13T00:14:43.192711678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192737 env[1319]: time="2025-08-13T00:14:43.192723038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192822 env[1319]: time="2025-08-13T00:14:43.192753938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192822 env[1319]: time="2025-08-13T00:14:43.192770260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:14:43.192928 env[1319]: time="2025-08-13T00:14:43.192907547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192958 env[1319]: time="2025-08-13T00:14:43.192931973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.192958 env[1319]: time="2025-08-13T00:14:43.192945271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.193002 env[1319]: time="2025-08-13T00:14:43.192957406Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:14:43.193031 env[1319]: time="2025-08-13T00:14:43.192971093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 00:14:43.193111 env[1319]: time="2025-08-13T00:14:43.193094577Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:14:43.193144 env[1319]: time="2025-08-13T00:14:43.193126718Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 00:14:43.193342 env[1319]: time="2025-08-13T00:14:43.193322820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:14:43.194108 env[1319]: time="2025-08-13T00:14:43.193925705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:14:43.195081 env[1319]: time="2025-08-13T00:14:43.194178683Z" level=info msg="Connect containerd service"
Aug 13 00:14:43.195081 env[1319]: time="2025-08-13T00:14:43.194383548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:14:43.195694 env[1319]: time="2025-08-13T00:14:43.195665190Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:14:43.199076 env[1319]: time="2025-08-13T00:14:43.199032464Z" level=info msg="Start subscribing containerd event"
Aug 13 00:14:43.199135 env[1319]: time="2025-08-13T00:14:43.199092985Z" level=info msg="Start recovering state"
Aug 13 00:14:43.199189 env[1319]: time="2025-08-13T00:14:43.199174791Z" level=info msg="Start event monitor"
Aug 13 00:14:43.199228 env[1319]: time="2025-08-13T00:14:43.199200845Z" level=info msg="Start snapshots syncer"
Aug 13 00:14:43.199228 env[1319]: time="2025-08-13T00:14:43.199213174Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:14:43.199228 env[1319]: time="2025-08-13T00:14:43.199222130Z" level=info msg="Start streaming server"
Aug 13 00:14:43.199410 env[1319]: time="2025-08-13T00:14:43.199386130Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:14:43.199448 env[1319]: time="2025-08-13T00:14:43.199438044Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:14:43.199498 env[1319]: time="2025-08-13T00:14:43.199487709Z" level=info msg="containerd successfully booted in 0.045755s"
Aug 13 00:14:43.199583 systemd[1]: Started containerd.service.
Aug 13 00:14:43.222156 locksmithd[1351]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:14:43.460536 tar[1315]: linux-arm64/LICENSE
Aug 13 00:14:43.460637 tar[1315]: linux-arm64/README.md
Aug 13 00:14:43.464816 systemd[1]: Finished prepare-helm.service.
Aug 13 00:14:43.927907 systemd-networkd[1097]: eth0: Gained IPv6LL
Aug 13 00:14:43.930223 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:14:43.931659 systemd[1]: Reached target network-online.target.
Aug 13 00:14:43.934355 systemd[1]: Starting kubelet.service...
Aug 13 00:14:44.637421 systemd[1]: Started kubelet.service.
Aug 13 00:14:45.096388 kubelet[1377]: E0813 00:14:45.096292    1377 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:14:45.097832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:14:45.097976 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:14:46.672043 sshd_keygen[1324]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:14:46.692363 systemd[1]: Finished sshd-keygen.service.
Aug 13 00:14:46.695064 systemd[1]: Starting issuegen.service...
Aug 13 00:14:46.700466 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:14:46.700778 systemd[1]: Finished issuegen.service.
Aug 13 00:14:46.707151 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 00:14:46.718589 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 00:14:46.722664 systemd[1]: Started getty@tty1.service.
Aug 13 00:14:46.725311 systemd[1]: Started serial-getty@ttyAMA0.service.
Aug 13 00:14:46.726591 systemd[1]: Reached target getty.target.
Aug 13 00:14:46.728177 systemd[1]: Reached target multi-user.target.
Aug 13 00:14:46.732423 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 00:14:46.739992 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 00:14:46.740238 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 00:14:46.741460 systemd[1]: Startup finished in 5.558s (kernel) + 7.206s (userspace) = 12.764s.
Aug 13 00:14:46.763440 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:14:46.764690 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:45222.service.
Aug 13 00:14:46.815433 sshd[1403]: Accepted publickey for core from 10.0.0.1 port 45222 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:14:46.823018 sshd[1403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:46.835142 systemd-logind[1303]: New session 1 of user core.
Aug 13 00:14:46.836444 systemd[1]: Created slice user-500.slice.
Aug 13 00:14:46.838486 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 00:14:46.851925 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 00:14:46.854500 systemd[1]: Starting user@500.service...
Aug 13 00:14:46.868668 (systemd)[1408]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:46.937153 systemd[1408]: Queued start job for default target default.target.
Aug 13 00:14:46.937414 systemd[1408]: Reached target paths.target.
Aug 13 00:14:46.937430 systemd[1408]: Reached target sockets.target.
Aug 13 00:14:46.937440 systemd[1408]: Reached target timers.target.
Aug 13 00:14:46.937450 systemd[1408]: Reached target basic.target.
Aug 13 00:14:46.937496 systemd[1408]: Reached target default.target.
Aug 13 00:14:46.937517 systemd[1408]: Startup finished in 61ms.
Aug 13 00:14:46.937761 systemd[1]: Started user@500.service.
Aug 13 00:14:46.938716 systemd[1]: Started session-1.scope.
Aug 13 00:14:46.991113 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:45228.service.
Aug 13 00:14:47.041867 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 45228 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:14:47.043512 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:47.047131 systemd-logind[1303]: New session 2 of user core.
Aug 13 00:14:47.048078 systemd[1]: Started session-2.scope.
Aug 13 00:14:47.105827 sshd[1417]: pam_unix(sshd:session): session closed for user core
Aug 13 00:14:47.109837 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:45232.service.
Aug 13 00:14:47.112364 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:45228.service: Deactivated successfully.
Aug 13 00:14:47.113891 systemd-logind[1303]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:14:47.113965 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:14:47.114700 systemd-logind[1303]: Removed session 2.
Aug 13 00:14:47.158344 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:14:47.160186 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:47.163908 systemd-logind[1303]: New session 3 of user core.
Aug 13 00:14:47.164815 systemd[1]: Started session-3.scope.
Aug 13 00:14:47.217136 sshd[1422]: pam_unix(sshd:session): session closed for user core
Aug 13 00:14:47.220593 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:45246.service.
Aug 13 00:14:47.223377 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:45232.service: Deactivated successfully.
Aug 13 00:14:47.225094 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:14:47.225098 systemd-logind[1303]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:14:47.226047 systemd-logind[1303]: Removed session 3.
Aug 13 00:14:47.266492 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 45246 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:14:47.268170 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:47.272167 systemd-logind[1303]: New session 4 of user core.
Aug 13 00:14:47.272775 systemd[1]: Started session-4.scope.
Aug 13 00:14:47.329961 sshd[1429]: pam_unix(sshd:session): session closed for user core
Aug 13 00:14:47.332629 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:45262.service.
Aug 13 00:14:47.334661 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:45246.service: Deactivated successfully.
Aug 13 00:14:47.336124 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:14:47.336137 systemd-logind[1303]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:14:47.337410 systemd-logind[1303]: Removed session 4.
Aug 13 00:14:47.375830 sshd[1436]: Accepted publickey for core from 10.0.0.1 port 45262 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:14:47.377384 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:14:47.380793 systemd-logind[1303]: New session 5 of user core.
Aug 13 00:14:47.381624 systemd[1]: Started session-5.scope.
Aug 13 00:14:47.462955 sudo[1442]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:14:47.463176 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:14:47.572034 systemd[1]: Starting docker.service...
Aug 13 00:14:47.669699 env[1454]: time="2025-08-13T00:14:47.669638191Z" level=info msg="Starting up"
Aug 13 00:14:47.671410 env[1454]: time="2025-08-13T00:14:47.671369431Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:14:47.671410 env[1454]: time="2025-08-13T00:14:47.671399597Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:14:47.671493 env[1454]: time="2025-08-13T00:14:47.671423008Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:14:47.671493 env[1454]: time="2025-08-13T00:14:47.671434360Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:14:47.673553 env[1454]: time="2025-08-13T00:14:47.673528305Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:14:47.673649 env[1454]: time="2025-08-13T00:14:47.673635420Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:14:47.673712 env[1454]: time="2025-08-13T00:14:47.673697442Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:14:47.673833 env[1454]: time="2025-08-13T00:14:47.673812767Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:14:47.678809 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport494153673-merged.mount: Deactivated successfully.
Aug 13 00:14:47.860204 env[1454]: time="2025-08-13T00:14:47.860109277Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 13 00:14:47.860503 env[1454]: time="2025-08-13T00:14:47.860484552Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 13 00:14:47.860717 env[1454]: time="2025-08-13T00:14:47.860703103Z" level=info msg="Loading containers: start."
Aug 13 00:14:47.992764 kernel: Initializing XFRM netlink socket
Aug 13 00:14:48.015978 env[1454]: time="2025-08-13T00:14:48.015930514Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:14:48.068701 systemd-networkd[1097]: docker0: Link UP
Aug 13 00:14:48.144260 env[1454]: time="2025-08-13T00:14:48.144167334Z" level=info msg="Loading containers: done."
Aug 13 00:14:48.165556 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3670657462-merged.mount: Deactivated successfully.
Aug 13 00:14:48.176738 env[1454]: time="2025-08-13T00:14:48.176686011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:14:48.177098 env[1454]: time="2025-08-13T00:14:48.177076438Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:14:48.177268 env[1454]: time="2025-08-13T00:14:48.177250569Z" level=info msg="Daemon has completed initialization"
Aug 13 00:14:48.198959 systemd[1]: Started docker.service.
Aug 13 00:14:48.205831 env[1454]: time="2025-08-13T00:14:48.205691827Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:14:48.810392 env[1319]: time="2025-08-13T00:14:48.810332115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 00:14:49.468637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626457969.mount: Deactivated successfully.
Aug 13 00:14:50.669133 env[1319]: time="2025-08-13T00:14:50.669088063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:50.670435 env[1319]: time="2025-08-13T00:14:50.670405054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:50.672330 env[1319]: time="2025-08-13T00:14:50.672292596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:50.674969 env[1319]: time="2025-08-13T00:14:50.674935312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:50.675714 env[1319]: time="2025-08-13T00:14:50.675685192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\""
Aug 13 00:14:50.679347 env[1319]: time="2025-08-13T00:14:50.679318448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 00:14:51.958888 env[1319]: time="2025-08-13T00:14:51.958808264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:51.960358 env[1319]: time="2025-08-13T00:14:51.960324086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:51.962400 env[1319]: time="2025-08-13T00:14:51.962362530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:51.964060 env[1319]: time="2025-08-13T00:14:51.964023800Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:51.964902 env[1319]: time="2025-08-13T00:14:51.964873492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\""
Aug 13 00:14:51.965601 env[1319]: time="2025-08-13T00:14:51.965576271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 00:14:53.148849 env[1319]: time="2025-08-13T00:14:53.148800778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:53.150251 env[1319]: time="2025-08-13T00:14:53.150217629Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:53.152333 env[1319]: time="2025-08-13T00:14:53.152307158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:53.153911 env[1319]: time="2025-08-13T00:14:53.153885691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:53.154694 env[1319]: time="2025-08-13T00:14:53.154669899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\""
Aug 13 00:14:53.155336 env[1319]: time="2025-08-13T00:14:53.155313573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 00:14:54.170855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065955651.mount: Deactivated successfully.
Aug 13 00:14:54.739720 env[1319]: time="2025-08-13T00:14:54.739659844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:54.740983 env[1319]: time="2025-08-13T00:14:54.740961411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:54.742344 env[1319]: time="2025-08-13T00:14:54.742307381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:54.743667 env[1319]: time="2025-08-13T00:14:54.743643224Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:54.744076 env[1319]: time="2025-08-13T00:14:54.744048337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\""
Aug 13 00:14:54.744558 env[1319]: time="2025-08-13T00:14:54.744533481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:14:55.348832 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:14:55.349031 systemd[1]: Stopped kubelet.service.
Aug 13 00:14:55.351160 systemd[1]: Starting kubelet.service...
Aug 13 00:14:55.363964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884534390.mount: Deactivated successfully.
Aug 13 00:14:55.456806 systemd[1]: Started kubelet.service.
Aug 13 00:14:55.511056 kubelet[1593]: E0813 00:14:55.511004    1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:14:55.514088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:14:55.514240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:14:56.348632 env[1319]: time="2025-08-13T00:14:56.348570261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.350954 env[1319]: time="2025-08-13T00:14:56.350908567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.358503 env[1319]: time="2025-08-13T00:14:56.358461794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.360921 env[1319]: time="2025-08-13T00:14:56.360880542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.363414 env[1319]: time="2025-08-13T00:14:56.363351485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 13 00:14:56.363962 env[1319]: time="2025-08-13T00:14:56.363927548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:14:56.931018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895927320.mount: Deactivated successfully.
Aug 13 00:14:56.935447 env[1319]: time="2025-08-13T00:14:56.935385725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.937733 env[1319]: time="2025-08-13T00:14:56.937676092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.939559 env[1319]: time="2025-08-13T00:14:56.939518101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.941599 env[1319]: time="2025-08-13T00:14:56.941557674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:14:56.942114 env[1319]: time="2025-08-13T00:14:56.942079233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 13 00:14:56.942496 env[1319]: time="2025-08-13T00:14:56.942472651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 00:14:57.449311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266055392.mount: Deactivated successfully.
Aug 13 00:14:59.387898 env[1319]: time="2025-08-13T00:14:59.387852751Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:14:59.390189 env[1319]: time="2025-08-13T00:14:59.390142891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:14:59.392420 env[1319]: time="2025-08-13T00:14:59.392378191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:14:59.394770 env[1319]: time="2025-08-13T00:14:59.394734649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:14:59.398827 env[1319]: time="2025-08-13T00:14:59.398683947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:15:04.915177 systemd[1]: Stopped kubelet.service. Aug 13 00:15:04.917317 systemd[1]: Starting kubelet.service... Aug 13 00:15:04.941610 systemd[1]: Reloading. 
Aug 13 00:15:04.988729 /usr/lib/systemd/system-generators/torcx-generator[1651]: time="2025-08-13T00:15:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:15:04.988772 /usr/lib/systemd/system-generators/torcx-generator[1651]: time="2025-08-13T00:15:04Z" level=info msg="torcx already run" Aug 13 00:15:05.133689 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:15:05.133892 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:15:05.152382 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:05.213980 systemd[1]: Started kubelet.service. Aug 13 00:15:05.217661 systemd[1]: Stopping kubelet.service... Aug 13 00:15:05.218707 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:15:05.219148 systemd[1]: Stopped kubelet.service. Aug 13 00:15:05.221181 systemd[1]: Starting kubelet.service... Aug 13 00:15:05.315912 systemd[1]: Started kubelet.service. Aug 13 00:15:05.370211 kubelet[1716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:05.370211 kubelet[1716]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:15:05.370211 kubelet[1716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:05.370619 kubelet[1716]: I0813 00:15:05.370284 1716 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:15:05.827219 kubelet[1716]: I0813 00:15:05.827175 1716 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:15:05.827219 kubelet[1716]: I0813 00:15:05.827209 1716 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:15:05.827477 kubelet[1716]: I0813 00:15:05.827451 1716 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:15:05.873406 kubelet[1716]: I0813 00:15:05.873361 1716 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:15:05.875603 kubelet[1716]: E0813 00:15:05.875568 1716 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:05.886460 kubelet[1716]: E0813 00:15:05.886426 1716 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:15:05.886460 kubelet[1716]: I0813 00:15:05.886459 1716 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 13 00:15:05.890098 kubelet[1716]: I0813 00:15:05.890075 1716 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:15:05.890838 kubelet[1716]: I0813 00:15:05.890824 1716 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:15:05.890969 kubelet[1716]: I0813 00:15:05.890943 1716 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:15:05.891151 kubelet[1716]: I0813 00:15:05.890969 1716 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:15:05.891278 kubelet[1716]: I0813 00:15:05.891268 1716 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:15:05.891311 kubelet[1716]: I0813 00:15:05.891279 1716 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:15:05.891520 kubelet[1716]: I0813 00:15:05.891499 1716 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:05.898257 kubelet[1716]: I0813 00:15:05.898228 1716 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:15:05.898325 kubelet[1716]: I0813 00:15:05.898265 1716 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:15:05.898325 kubelet[1716]: I0813 00:15:05.898288 1716 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:15:05.898431 kubelet[1716]: I0813 00:15:05.898417 1716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:15:05.927392 kubelet[1716]: W0813 00:15:05.927326 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:05.927550 kubelet[1716]: E0813 00:15:05.927441 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:05.928512 kubelet[1716]: W0813 00:15:05.928448 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:05.928691 kubelet[1716]: E0813 00:15:05.928652 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:05.929067 kubelet[1716]: I0813 00:15:05.929049 1716 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:15:05.930094 kubelet[1716]: I0813 00:15:05.930071 1716 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:15:05.930155 kubelet[1716]: W0813 00:15:05.930124 1716 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:15:05.931659 kubelet[1716]: I0813 00:15:05.931639 1716 server.go:1274] "Started kubelet" Aug 13 00:15:05.932243 kubelet[1716]: I0813 00:15:05.932121 1716 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:15:05.933020 kubelet[1716]: I0813 00:15:05.932624 1716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:15:05.933020 kubelet[1716]: I0813 00:15:05.932978 1716 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:15:05.933437 kubelet[1716]: I0813 00:15:05.933409 1716 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:15:05.935680 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 00:15:05.935968 kubelet[1716]: I0813 00:15:05.935945 1716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:15:05.937197 kubelet[1716]: I0813 00:15:05.937168 1716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:15:05.937922 kubelet[1716]: E0813 00:15:05.937897 1716 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:15:05.938695 kubelet[1716]: I0813 00:15:05.938514 1716 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:15:05.938695 kubelet[1716]: I0813 00:15:05.938627 1716 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:15:05.938695 kubelet[1716]: I0813 00:15:05.938668 1716 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:15:05.939063 kubelet[1716]: W0813 00:15:05.939017 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:05.939125 kubelet[1716]: E0813 00:15:05.939070 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:05.939125 kubelet[1716]: E0813 00:15:05.939087 1716 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:15:05.939220 kubelet[1716]: E0813 00:15:05.939193 1716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms" Aug 13 00:15:05.939220 kubelet[1716]: I0813 00:15:05.939205 1716 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:15:05.939311 kubelet[1716]: I0813 00:15:05.939293 1716 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:15:05.940488 kubelet[1716]: I0813 00:15:05.940467 1716 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:15:05.948625 kubelet[1716]: E0813 00:15:05.943483 1716 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2b50d3327870 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:15:05.931618416 +0000 UTC m=+0.606271260,LastTimestamp:2025-08-13 00:15:05.931618416 +0000 UTC m=+0.606271260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:15:05.963854 kubelet[1716]: I0813 00:15:05.963792 1716 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:15:05.965150 kubelet[1716]: I0813 00:15:05.965123 1716 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:15:05.965150 kubelet[1716]: I0813 00:15:05.965151 1716 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:15:05.965294 kubelet[1716]: I0813 00:15:05.965282 1716 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:15:05.965343 kubelet[1716]: E0813 00:15:05.965326 1716 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:05.965927 kubelet[1716]: W0813 00:15:05.965901 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:05.965979 kubelet[1716]: E0813 00:15:05.965946 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:05.967150 kubelet[1716]: I0813 00:15:05.967129 1716 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:15:05.967150 kubelet[1716]: I0813 00:15:05.967146 1716 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:05.967300 kubelet[1716]: I0813 00:15:05.967286 1716 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:06.040005 kubelet[1716]: E0813 00:15:06.039949 1716 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:15:06.044261 kubelet[1716]: I0813 00:15:06.044236 1716 policy_none.go:49] "None policy: Start" Aug 13 00:15:06.045102 kubelet[1716]: I0813 00:15:06.045078 1716 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:15:06.045171 
kubelet[1716]: I0813 00:15:06.045120 1716 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:06.052777 kubelet[1716]: I0813 00:15:06.052734 1716 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:06.052938 kubelet[1716]: I0813 00:15:06.052922 1716 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:06.052995 kubelet[1716]: I0813 00:15:06.052942 1716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:06.053781 kubelet[1716]: I0813 00:15:06.053765 1716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:06.054771 kubelet[1716]: E0813 00:15:06.054734 1716 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:15:06.141706 kubelet[1716]: E0813 00:15:06.140332 1716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" Aug 13 00:15:06.154408 kubelet[1716]: I0813 00:15:06.154365 1716 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:15:06.154997 kubelet[1716]: E0813 00:15:06.154959 1716 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Aug 13 00:15:06.239237 kubelet[1716]: I0813 00:15:06.239188 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:06.239237 kubelet[1716]: I0813 00:15:06.239230 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:06.239387 kubelet[1716]: I0813 00:15:06.239249 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:06.239387 kubelet[1716]: I0813 00:15:06.239269 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:15:06.239387 kubelet[1716]: I0813 00:15:06.239287 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:06.239387 kubelet[1716]: I0813 00:15:06.239304 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:06.239387 kubelet[1716]: I0813 00:15:06.239319 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:06.239507 kubelet[1716]: I0813 00:15:06.239334 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:06.239507 kubelet[1716]: I0813 00:15:06.239350 1716 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:06.356642 kubelet[1716]: I0813 00:15:06.356619 1716 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:15:06.357142 kubelet[1716]: E0813 00:15:06.357098 1716 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Aug 13 00:15:06.373490 kubelet[1716]: E0813 00:15:06.373463 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:06.377139 kubelet[1716]: E0813 00:15:06.374595 1716 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:06.377139 kubelet[1716]: E0813 00:15:06.377090 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:06.377905 env[1319]: time="2025-08-13T00:15:06.377835928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:06.378174 env[1319]: time="2025-08-13T00:15:06.378114051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:06.378344 env[1319]: time="2025-08-13T00:15:06.378300585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40d184951440dc7c61d69060fef556d0,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:06.541566 kubelet[1716]: E0813 00:15:06.540995 1716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Aug 13 00:15:06.759375 kubelet[1716]: I0813 00:15:06.759334 1716 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:15:06.759670 kubelet[1716]: E0813 00:15:06.759641 1716 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Aug 13 00:15:07.024798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822640559.mount: Deactivated successfully. 
Aug 13 00:15:07.031108 env[1319]: time="2025-08-13T00:15:07.031055621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.034097 env[1319]: time="2025-08-13T00:15:07.034058472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.035052 env[1319]: time="2025-08-13T00:15:07.035023827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.035800 env[1319]: time="2025-08-13T00:15:07.035771294Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.037933 env[1319]: time="2025-08-13T00:15:07.037851178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.039611 env[1319]: time="2025-08-13T00:15:07.039581618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.041962 env[1319]: time="2025-08-13T00:15:07.041926410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.045502 env[1319]: time="2025-08-13T00:15:07.045469468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.047049 env[1319]: time="2025-08-13T00:15:07.047020451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.048357 env[1319]: time="2025-08-13T00:15:07.048320988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.049869 env[1319]: time="2025-08-13T00:15:07.049840012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.050807 env[1319]: time="2025-08-13T00:15:07.050734535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:07.062077 kubelet[1716]: W0813 00:15:07.061989 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:07.062077 kubelet[1716]: E0813 00:15:07.062058 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:07.083684 env[1319]: time="2025-08-13T00:15:07.083613930Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:07.083684 env[1319]: time="2025-08-13T00:15:07.083658075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:07.083902 env[1319]: time="2025-08-13T00:15:07.083865895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:07.084241 env[1319]: time="2025-08-13T00:15:07.084057336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:07.084241 env[1319]: time="2025-08-13T00:15:07.084134880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:07.084241 env[1319]: time="2025-08-13T00:15:07.084170835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:07.084484 env[1319]: time="2025-08-13T00:15:07.084414291Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdf6dd12d2c15bb1721aa5150f4a371813c3eaf212a37d2fde8c64b057f592c8 pid=1759 runtime=io.containerd.runc.v2 Aug 13 00:15:07.084614 env[1319]: time="2025-08-13T00:15:07.084543849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/276732eefce151f1295116a3ce40d3f15459d3f734f2c88992d5a473b9038363 pid=1767 runtime=io.containerd.runc.v2 Aug 13 00:15:07.088295 env[1319]: time="2025-08-13T00:15:07.088201323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:07.088295 env[1319]: time="2025-08-13T00:15:07.088247665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:07.088493 env[1319]: time="2025-08-13T00:15:07.088454327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:07.088805 env[1319]: time="2025-08-13T00:15:07.088736016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/799284cd2664a37c78c1f69f968fb6df5864d19321248f81c0d16d59c507313d pid=1793 runtime=io.containerd.runc.v2 Aug 13 00:15:07.136111 kubelet[1716]: W0813 00:15:07.136035 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:07.136111 kubelet[1716]: E0813 00:15:07.136102 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:07.178082 env[1319]: time="2025-08-13T00:15:07.177468447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"799284cd2664a37c78c1f69f968fb6df5864d19321248f81c0d16d59c507313d\"" Aug 13 00:15:07.178490 kubelet[1716]: E0813 00:15:07.178470 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:07.179422 env[1319]: time="2025-08-13T00:15:07.179360006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40d184951440dc7c61d69060fef556d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf6dd12d2c15bb1721aa5150f4a371813c3eaf212a37d2fde8c64b057f592c8\"" Aug 13 00:15:07.180020 kubelet[1716]: E0813 00:15:07.179871 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:07.180994 env[1319]: time="2025-08-13T00:15:07.180959449Z" level=info msg="CreateContainer within sandbox \"799284cd2664a37c78c1f69f968fb6df5864d19321248f81c0d16d59c507313d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:15:07.181980 kubelet[1716]: W0813 00:15:07.181888 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:07.181980 kubelet[1716]: E0813 00:15:07.181949 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:07.182185 env[1319]: time="2025-08-13T00:15:07.182148684Z" level=info msg="CreateContainer within sandbox \"bdf6dd12d2c15bb1721aa5150f4a371813c3eaf212a37d2fde8c64b057f592c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:15:07.184160 env[1319]: time="2025-08-13T00:15:07.184109916Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"276732eefce151f1295116a3ce40d3f15459d3f734f2c88992d5a473b9038363\"" Aug 13 00:15:07.184979 kubelet[1716]: E0813 00:15:07.184827 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:07.186738 env[1319]: time="2025-08-13T00:15:07.186697366Z" level=info msg="CreateContainer within sandbox \"276732eefce151f1295116a3ce40d3f15459d3f734f2c88992d5a473b9038363\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:15:07.202559 env[1319]: time="2025-08-13T00:15:07.202498760Z" level=info msg="CreateContainer within sandbox \"bdf6dd12d2c15bb1721aa5150f4a371813c3eaf212a37d2fde8c64b057f592c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b71777dc09b474b38163f48ce18717c4c92f06a18d40548672307ea18edf18b2\"" Aug 13 00:15:07.203319 env[1319]: time="2025-08-13T00:15:07.203275511Z" level=info msg="StartContainer for \"b71777dc09b474b38163f48ce18717c4c92f06a18d40548672307ea18edf18b2\"" Aug 13 00:15:07.205068 env[1319]: time="2025-08-13T00:15:07.205016897Z" level=info msg="CreateContainer within sandbox \"799284cd2664a37c78c1f69f968fb6df5864d19321248f81c0d16d59c507313d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f9ab9517b8985d4f4383cb45fca66d8b1768b1aa34d8afce83c9030eaef0eff\"" Aug 13 00:15:07.205587 env[1319]: time="2025-08-13T00:15:07.205560698Z" level=info msg="StartContainer for \"1f9ab9517b8985d4f4383cb45fca66d8b1768b1aa34d8afce83c9030eaef0eff\"" Aug 13 00:15:07.209364 env[1319]: time="2025-08-13T00:15:07.209319526Z" level=info msg="CreateContainer within sandbox \"276732eefce151f1295116a3ce40d3f15459d3f734f2c88992d5a473b9038363\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"dd9ed43cfe2c128eb5fc05ba32a1eebf97792840120cb91ed0a9f4b779f5cf5f\"" Aug 13 00:15:07.209931 env[1319]: time="2025-08-13T00:15:07.209898003Z" level=info msg="StartContainer for \"dd9ed43cfe2c128eb5fc05ba32a1eebf97792840120cb91ed0a9f4b779f5cf5f\"" Aug 13 00:15:07.291307 env[1319]: time="2025-08-13T00:15:07.291203027Z" level=info msg="StartContainer for \"dd9ed43cfe2c128eb5fc05ba32a1eebf97792840120cb91ed0a9f4b779f5cf5f\" returns successfully" Aug 13 00:15:07.306762 env[1319]: time="2025-08-13T00:15:07.304971159Z" level=info msg="StartContainer for \"1f9ab9517b8985d4f4383cb45fca66d8b1768b1aa34d8afce83c9030eaef0eff\" returns successfully" Aug 13 00:15:07.337323 kubelet[1716]: W0813 00:15:07.334943 1716 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Aug 13 00:15:07.337323 kubelet[1716]: E0813 00:15:07.335021 1716 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:07.342436 kubelet[1716]: E0813 00:15:07.342389 1716 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="1.6s" Aug 13 00:15:07.351441 env[1319]: time="2025-08-13T00:15:07.351357013Z" level=info msg="StartContainer for \"b71777dc09b474b38163f48ce18717c4c92f06a18d40548672307ea18edf18b2\" returns successfully" Aug 13 00:15:07.561279 kubelet[1716]: I0813 00:15:07.561179 1716 kubelet_node_status.go:72] "Attempting to register node" node="localhost" 
Aug 13 00:15:07.561574 kubelet[1716]: E0813 00:15:07.561512 1716 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Aug 13 00:15:07.971851 kubelet[1716]: E0813 00:15:07.971736 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:07.973991 kubelet[1716]: E0813 00:15:07.973965 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:07.975953 kubelet[1716]: E0813 00:15:07.975931 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:08.977428 kubelet[1716]: E0813 00:15:08.977397 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:09.163679 kubelet[1716]: I0813 00:15:09.163635 1716 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:15:09.596487 kubelet[1716]: E0813 00:15:09.596444 1716 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:15:09.661359 kubelet[1716]: I0813 00:15:09.661313 1716 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:15:09.661523 kubelet[1716]: E0813 00:15:09.661376 1716 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 00:15:09.902512 kubelet[1716]: I0813 00:15:09.902415 1716 apiserver.go:52] "Watching apiserver" Aug 
13 00:15:09.939647 kubelet[1716]: I0813 00:15:09.939596 1716 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:15:10.148186 kubelet[1716]: E0813 00:15:10.148124 1716 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:15:10.148507 kubelet[1716]: E0813 00:15:10.148305 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:10.761329 kubelet[1716]: E0813 00:15:10.761299 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:10.978981 kubelet[1716]: E0813 00:15:10.978949 1716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:11.801175 systemd[1]: Reloading. Aug 13 00:15:11.849716 /usr/lib/systemd/system-generators/torcx-generator[2010]: time="2025-08-13T00:15:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:15:11.849759 /usr/lib/systemd/system-generators/torcx-generator[2010]: time="2025-08-13T00:15:11Z" level=info msg="torcx already run" Aug 13 00:15:11.919199 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 13 00:15:11.919215 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:15:11.938490 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:12.013232 systemd[1]: Stopping kubelet.service... Aug 13 00:15:12.036141 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:15:12.036478 systemd[1]: Stopped kubelet.service. Aug 13 00:15:12.038810 systemd[1]: Starting kubelet.service... Aug 13 00:15:12.135212 systemd[1]: Started kubelet.service. Aug 13 00:15:12.177516 kubelet[2063]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:12.177904 kubelet[2063]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:15:12.177955 kubelet[2063]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:15:12.178153 kubelet[2063]: I0813 00:15:12.178122 2063 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:15:12.184901 kubelet[2063]: I0813 00:15:12.184863 2063 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:15:12.184901 kubelet[2063]: I0813 00:15:12.184908 2063 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:15:12.185211 kubelet[2063]: I0813 00:15:12.185183 2063 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:15:12.187055 kubelet[2063]: I0813 00:15:12.187030 2063 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:15:12.189344 kubelet[2063]: I0813 00:15:12.189317 2063 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:15:12.193698 kubelet[2063]: E0813 00:15:12.193663 2063 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:15:12.193698 kubelet[2063]: I0813 00:15:12.193701 2063 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:15:12.197264 kubelet[2063]: I0813 00:15:12.197237 2063 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:15:12.197802 kubelet[2063]: I0813 00:15:12.197778 2063 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:15:12.197914 kubelet[2063]: I0813 00:15:12.197881 2063 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:15:12.198335 kubelet[2063]: I0813 00:15:12.197909 2063 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Aug 13 00:15:12.198417 kubelet[2063]: I0813 00:15:12.198348 2063 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:15:12.198417 kubelet[2063]: I0813 00:15:12.198360 2063 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:15:12.198417 kubelet[2063]: I0813 00:15:12.198398 2063 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:12.198651 kubelet[2063]: I0813 00:15:12.198618 2063 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:15:12.198684 kubelet[2063]: I0813 00:15:12.198656 2063 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:15:12.198684 kubelet[2063]: I0813 00:15:12.198677 2063 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:15:12.198727 kubelet[2063]: I0813 00:15:12.198693 2063 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:15:12.199502 kubelet[2063]: I0813 00:15:12.199484 2063 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:15:12.199981 kubelet[2063]: I0813 00:15:12.199962 2063 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:15:12.200400 kubelet[2063]: I0813 00:15:12.200383 2063 server.go:1274] "Started kubelet" Aug 13 00:15:12.202543 kubelet[2063]: I0813 00:15:12.202521 2063 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:15:12.208844 kubelet[2063]: I0813 00:15:12.208795 2063 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:15:12.209641 kubelet[2063]: I0813 00:15:12.209605 2063 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:15:12.209920 kubelet[2063]: E0813 00:15:12.209889 2063 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:15:12.210112 kubelet[2063]: I0813 00:15:12.210096 2063 
server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:15:12.211317 kubelet[2063]: I0813 00:15:12.211254 2063 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:15:12.211605 kubelet[2063]: I0813 00:15:12.211590 2063 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:15:12.218754 kubelet[2063]: I0813 00:15:12.212893 2063 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:15:12.218754 kubelet[2063]: I0813 00:15:12.213023 2063 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:15:12.218754 kubelet[2063]: I0813 00:15:12.217660 2063 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:15:12.219562 kubelet[2063]: I0813 00:15:12.219542 2063 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:15:12.219786 kubelet[2063]: I0813 00:15:12.219592 2063 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:15:12.219860 kubelet[2063]: I0813 00:15:12.219849 2063 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:15:12.219920 kubelet[2063]: I0813 00:15:12.219910 2063 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:15:12.220177 kubelet[2063]: E0813 00:15:12.220156 2063 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:12.229633 kubelet[2063]: I0813 00:15:12.227661 2063 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:15:12.229633 kubelet[2063]: I0813 00:15:12.227819 2063 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:15:12.231055 kubelet[2063]: I0813 00:15:12.230094 2063 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:15:12.233335 kubelet[2063]: E0813 00:15:12.232874 2063 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:15:12.276339 kubelet[2063]: I0813 00:15:12.276310 2063 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:15:12.276339 kubelet[2063]: I0813 00:15:12.276331 2063 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:12.276483 kubelet[2063]: I0813 00:15:12.276354 2063 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:12.276526 kubelet[2063]: I0813 00:15:12.276510 2063 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:15:12.276554 kubelet[2063]: I0813 00:15:12.276526 2063 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:15:12.276554 kubelet[2063]: I0813 00:15:12.276547 2063 policy_none.go:49] "None policy: Start" Aug 13 00:15:12.277197 kubelet[2063]: I0813 00:15:12.277180 2063 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:15:12.277265 kubelet[2063]: I0813 00:15:12.277203 2063 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:12.277372 kubelet[2063]: I0813 00:15:12.277358 2063 state_mem.go:75] "Updated machine memory state" Aug 13 00:15:12.278539 kubelet[2063]: I0813 00:15:12.278511 2063 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:12.278696 kubelet[2063]: I0813 00:15:12.278675 2063 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:12.278735 kubelet[2063]: I0813 00:15:12.278692 2063 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:12.278954 kubelet[2063]: I0813 00:15:12.278936 2063 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:12.327727 kubelet[2063]: E0813 00:15:12.327684 2063 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" 
Aug 13 00:15:12.382222 kubelet[2063]: I0813 00:15:12.382194 2063 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:15:12.393161 kubelet[2063]: I0813 00:15:12.392943 2063 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:15:12.393551 kubelet[2063]: I0813 00:15:12.393536 2063 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:15:12.414355 kubelet[2063]: I0813 00:15:12.414292 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:12.414510 kubelet[2063]: I0813 00:15:12.414375 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:12.414510 kubelet[2063]: I0813 00:15:12.414418 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:12.414510 kubelet[2063]: I0813 00:15:12.414439 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 
13 00:15:12.414510 kubelet[2063]: I0813 00:15:12.414457 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40d184951440dc7c61d69060fef556d0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40d184951440dc7c61d69060fef556d0\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:12.414510 kubelet[2063]: I0813 00:15:12.414492 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:12.414619 kubelet[2063]: I0813 00:15:12.414509 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:12.414619 kubelet[2063]: I0813 00:15:12.414544 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:12.414619 kubelet[2063]: I0813 00:15:12.414581 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 13 00:15:12.627436 kubelet[2063]: E0813 00:15:12.627406 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:12.627564 kubelet[2063]: E0813 00:15:12.627411 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:12.628638 kubelet[2063]: E0813 00:15:12.628618 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:12.807001 sudo[2097]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:15:12.807238 sudo[2097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:15:13.205143 kubelet[2063]: I0813 00:15:13.205095 2063 apiserver.go:52] "Watching apiserver" Aug 13 00:15:13.213522 kubelet[2063]: I0813 00:15:13.213476 2063 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:15:13.244024 kubelet[2063]: E0813 00:15:13.243993 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:13.244223 kubelet[2063]: E0813 00:15:13.244192 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:13.244295 sudo[2097]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:13.253724 kubelet[2063]: E0813 00:15:13.253675 2063 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Aug 13 00:15:13.254106 kubelet[2063]: E0813 00:15:13.254090 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:13.265844 kubelet[2063]: I0813 00:15:13.265283 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.265268732 podStartE2EDuration="1.265268732s" podCreationTimestamp="2025-08-13 00:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:13.265009837 +0000 UTC m=+1.123303869" watchObservedRunningTime="2025-08-13 00:15:13.265268732 +0000 UTC m=+1.123562724" Aug 13 00:15:13.281530 kubelet[2063]: I0813 00:15:13.281478 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.281460107 podStartE2EDuration="3.281460107s" podCreationTimestamp="2025-08-13 00:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:13.273485012 +0000 UTC m=+1.131779044" watchObservedRunningTime="2025-08-13 00:15:13.281460107 +0000 UTC m=+1.139754139" Aug 13 00:15:13.281703 kubelet[2063]: I0813 00:15:13.281558 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2815542149999999 podStartE2EDuration="1.281554215s" podCreationTimestamp="2025-08-13 00:15:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:13.281197215 +0000 UTC m=+1.139491246" watchObservedRunningTime="2025-08-13 00:15:13.281554215 +0000 UTC m=+1.139848247" Aug 13 00:15:14.245470 
kubelet[2063]: E0813 00:15:14.245429 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:14.246409 kubelet[2063]: E0813 00:15:14.246289 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:15.247234 kubelet[2063]: E0813 00:15:15.247188 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:15.462731 sudo[1442]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:15.464200 sshd[1436]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:15.466819 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:45262.service: Deactivated successfully. Aug 13 00:15:15.467955 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:15:15.468351 systemd-logind[1303]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:15:15.469519 systemd-logind[1303]: Removed session 5. Aug 13 00:15:18.442724 kubelet[2063]: I0813 00:15:18.442677 2063 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:15:18.446275 env[1319]: time="2025-08-13T00:15:18.444845921Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:15:18.446582 kubelet[2063]: I0813 00:15:18.445044 2063 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:15:19.472250 kubelet[2063]: I0813 00:15:19.472159 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b6deb39-0943-4568-8990-2e5000ddae4a-xtables-lock\") pod \"kube-proxy-9zvz6\" (UID: \"9b6deb39-0943-4568-8990-2e5000ddae4a\") " pod="kube-system/kube-proxy-9zvz6" Aug 13 00:15:19.472250 kubelet[2063]: I0813 00:15:19.472204 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-clustermesh-secrets\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472250 kubelet[2063]: I0813 00:15:19.472226 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-kernel\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472250 kubelet[2063]: I0813 00:15:19.472243 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhvk\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-kube-api-access-xzhvk\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472250 kubelet[2063]: I0813 00:15:19.472261 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cni-path\") pod \"cilium-cbwn4\" (UID: 
\"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472883 kubelet[2063]: I0813 00:15:19.472277 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-config-path\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472883 kubelet[2063]: I0813 00:15:19.472294 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b6deb39-0943-4568-8990-2e5000ddae4a-kube-proxy\") pod \"kube-proxy-9zvz6\" (UID: \"9b6deb39-0943-4568-8990-2e5000ddae4a\") " pod="kube-system/kube-proxy-9zvz6" Aug 13 00:15:19.472883 kubelet[2063]: I0813 00:15:19.472310 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-etc-cni-netd\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472883 kubelet[2063]: I0813 00:15:19.472328 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-lib-modules\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.472883 kubelet[2063]: I0813 00:15:19.472345 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b6deb39-0943-4568-8990-2e5000ddae4a-lib-modules\") pod \"kube-proxy-9zvz6\" (UID: \"9b6deb39-0943-4568-8990-2e5000ddae4a\") " pod="kube-system/kube-proxy-9zvz6" Aug 13 00:15:19.472883 kubelet[2063]: I0813 
00:15:19.472360 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-bpf-maps\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472376 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hostproc\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472389 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-cgroup\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472404 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-net\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472494 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-run\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472548 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jwl\" 
(UniqueName: \"kubernetes.io/projected/9b6deb39-0943-4568-8990-2e5000ddae4a-kube-api-access-l9jwl\") pod \"kube-proxy-9zvz6\" (UID: \"9b6deb39-0943-4568-8990-2e5000ddae4a\") " pod="kube-system/kube-proxy-9zvz6" Aug 13 00:15:19.473032 kubelet[2063]: I0813 00:15:19.472574 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-xtables-lock\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.473159 kubelet[2063]: I0813 00:15:19.472592 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hubble-tls\") pod \"cilium-cbwn4\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") " pod="kube-system/cilium-cbwn4" Aug 13 00:15:19.573146 kubelet[2063]: I0813 00:15:19.573106 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cwtw\" (UniqueName: \"kubernetes.io/projected/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-kube-api-access-4cwtw\") pod \"cilium-operator-5d85765b45-mnxmx\" (UID: \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\") " pod="kube-system/cilium-operator-5d85765b45-mnxmx" Aug 13 00:15:19.573268 kubelet[2063]: I0813 00:15:19.573178 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-cilium-config-path\") pod \"cilium-operator-5d85765b45-mnxmx\" (UID: \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\") " pod="kube-system/cilium-operator-5d85765b45-mnxmx" Aug 13 00:15:19.574114 kubelet[2063]: I0813 00:15:19.574086 2063 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:15:19.671401 kubelet[2063]: E0813 00:15:19.671362 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:19.672095 env[1319]: time="2025-08-13T00:15:19.672050029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zvz6,Uid:9b6deb39-0943-4568-8990-2e5000ddae4a,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:19.679628 kubelet[2063]: E0813 00:15:19.679601 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:19.680739 env[1319]: time="2025-08-13T00:15:19.680687292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbwn4,Uid:d24ebe88-bea5-4d65-a553-f6f4fe520ad9,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:19.693819 env[1319]: time="2025-08-13T00:15:19.693734971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:19.693952 env[1319]: time="2025-08-13T00:15:19.693835736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:19.693952 env[1319]: time="2025-08-13T00:15:19.693864697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:19.694170 env[1319]: time="2025-08-13T00:15:19.694138591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1966a3d457322aa2175ff3e7e4c8f71fd0cc47ff0bbb062449599bb5e260166 pid=2160 runtime=io.containerd.runc.v2 Aug 13 00:15:19.704330 env[1319]: time="2025-08-13T00:15:19.704177802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:19.704330 env[1319]: time="2025-08-13T00:15:19.704236285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:19.704330 env[1319]: time="2025-08-13T00:15:19.704246486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:19.705433 env[1319]: time="2025-08-13T00:15:19.704473817Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb pid=2185 runtime=io.containerd.runc.v2 Aug 13 00:15:19.755669 env[1319]: time="2025-08-13T00:15:19.754279256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zvz6,Uid:9b6deb39-0943-4568-8990-2e5000ddae4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1966a3d457322aa2175ff3e7e4c8f71fd0cc47ff0bbb062449599bb5e260166\"" Aug 13 00:15:19.755987 kubelet[2063]: E0813 00:15:19.755732 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:19.760670 env[1319]: time="2025-08-13T00:15:19.760284510Z" level=info msg="CreateContainer within sandbox \"b1966a3d457322aa2175ff3e7e4c8f71fd0cc47ff0bbb062449599bb5e260166\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:15:19.763413 env[1319]: time="2025-08-13T00:15:19.763374901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbwn4,Uid:d24ebe88-bea5-4d65-a553-f6f4fe520ad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\"" Aug 13 00:15:19.764405 kubelet[2063]: E0813 00:15:19.764304 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:19.777655 env[1319]: time="2025-08-13T00:15:19.776855001Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:15:19.777823 kubelet[2063]: E0813 00:15:19.777266 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:19.778028 env[1319]: time="2025-08-13T00:15:19.777986977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mnxmx,Uid:5bdeba23-abd0-4d90-a0b4-4e2d0d30934d,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:19.781384 env[1319]: time="2025-08-13T00:15:19.781305139Z" level=info msg="CreateContainer within sandbox \"b1966a3d457322aa2175ff3e7e4c8f71fd0cc47ff0bbb062449599bb5e260166\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc035c1c4431a18e431856a8b288e66fde174c5222a5300d5b51576d07570e63\"" Aug 13 00:15:19.782119 env[1319]: time="2025-08-13T00:15:19.782089217Z" level=info msg="StartContainer for \"dc035c1c4431a18e431856a8b288e66fde174c5222a5300d5b51576d07570e63\"" Aug 13 00:15:19.827702 env[1319]: time="2025-08-13T00:15:19.827499681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:19.827702 env[1319]: time="2025-08-13T00:15:19.827541043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:19.827702 env[1319]: time="2025-08-13T00:15:19.827552004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:19.827889 env[1319]: time="2025-08-13T00:15:19.827787375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def pid=2266 runtime=io.containerd.runc.v2 Aug 13 00:15:19.841428 env[1319]: time="2025-08-13T00:15:19.841371680Z" level=info msg="StartContainer for \"dc035c1c4431a18e431856a8b288e66fde174c5222a5300d5b51576d07570e63\" returns successfully" Aug 13 00:15:19.914832 env[1319]: time="2025-08-13T00:15:19.914783235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mnxmx,Uid:5bdeba23-abd0-4d90-a0b4-4e2d0d30934d,Namespace:kube-system,Attempt:0,} returns sandbox id \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\"" Aug 13 00:15:19.915464 kubelet[2063]: E0813 00:15:19.915445 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:20.259976 kubelet[2063]: E0813 00:15:20.259943 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:20.272213 kubelet[2063]: I0813 00:15:20.272148 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zvz6" podStartSLOduration=1.272130292 podStartE2EDuration="1.272130292s" 
podCreationTimestamp="2025-08-13 00:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:20.271998606 +0000 UTC m=+8.130292598" watchObservedRunningTime="2025-08-13 00:15:20.272130292 +0000 UTC m=+8.130424284" Aug 13 00:15:20.429013 kubelet[2063]: E0813 00:15:20.428978 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:21.264109 kubelet[2063]: E0813 00:15:21.264075 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:24.170987 kubelet[2063]: E0813 00:15:24.170948 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:25.093604 kubelet[2063]: E0813 00:15:25.093537 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:27.989568 update_engine[1312]: I0813 00:15:27.989516 1312 update_attempter.cc:509] Updating boot flags... Aug 13 00:15:30.690879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250927536.mount: Deactivated successfully. Aug 13 00:15:39.447146 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:33058.service. Aug 13 00:15:39.501432 sshd[2455]: Accepted publickey for core from 10.0.0.1 port 33058 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:15:39.502975 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:39.507940 systemd-logind[1303]: New session 6 of user core. Aug 13 00:15:39.508844 systemd[1]: Started session-6.scope. 
Aug 13 00:15:39.639719 sshd[2455]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:39.643490 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:33058.service: Deactivated successfully. Aug 13 00:15:39.644592 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:15:39.644606 systemd-logind[1303]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:15:39.645772 systemd-logind[1303]: Removed session 6. Aug 13 00:15:44.642813 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:58814.service. Aug 13 00:15:44.689634 sshd[2487]: Accepted publickey for core from 10.0.0.1 port 58814 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:15:44.691163 sshd[2487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:44.695737 systemd-logind[1303]: New session 7 of user core. Aug 13 00:15:44.696040 systemd[1]: Started session-7.scope. Aug 13 00:15:44.813670 sshd[2487]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:44.816596 systemd-logind[1303]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:15:44.816633 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:58814.service: Deactivated successfully. Aug 13 00:15:44.817505 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:15:44.817977 systemd-logind[1303]: Removed session 7. 
Aug 13 00:15:48.523814 env[1319]: time="2025-08-13T00:15:48.523730617Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:48.525837 env[1319]: time="2025-08-13T00:15:48.525785603Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:48.528111 env[1319]: time="2025-08-13T00:15:48.528065353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:48.528634 env[1319]: time="2025-08-13T00:15:48.528591480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 00:15:48.530874 env[1319]: time="2025-08-13T00:15:48.530540145Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:15:48.539725 env[1319]: time="2025-08-13T00:15:48.539680704Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:15:48.561181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376928197.mount: Deactivated successfully. Aug 13 00:15:48.563672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460852832.mount: Deactivated successfully. 
Aug 13 00:15:48.566232 env[1319]: time="2025-08-13T00:15:48.566178368Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\"" Aug 13 00:15:48.566931 env[1319]: time="2025-08-13T00:15:48.566894977Z" level=info msg="StartContainer for \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\"" Aug 13 00:15:48.689782 env[1319]: time="2025-08-13T00:15:48.687463104Z" level=info msg="StartContainer for \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\" returns successfully" Aug 13 00:15:48.781883 env[1319]: time="2025-08-13T00:15:48.781737889Z" level=info msg="shim disconnected" id=892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372 Aug 13 00:15:48.781883 env[1319]: time="2025-08-13T00:15:48.781803090Z" level=warning msg="cleaning up after shim disconnected" id=892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372 namespace=k8s.io Aug 13 00:15:48.781883 env[1319]: time="2025-08-13T00:15:48.781814450Z" level=info msg="cleaning up dead shim" Aug 13 00:15:48.790776 env[1319]: time="2025-08-13T00:15:48.790712405Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:15:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2551 runtime=io.containerd.runc.v2\n" Aug 13 00:15:49.313673 kubelet[2063]: E0813 00:15:49.313640 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:49.316397 env[1319]: time="2025-08-13T00:15:49.316335188Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:15:49.334981 env[1319]: 
time="2025-08-13T00:15:49.334915702Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\"" Aug 13 00:15:49.335720 env[1319]: time="2025-08-13T00:15:49.335688832Z" level=info msg="StartContainer for \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\"" Aug 13 00:15:49.391134 env[1319]: time="2025-08-13T00:15:49.391077449Z" level=info msg="StartContainer for \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\" returns successfully" Aug 13 00:15:49.403649 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:15:49.404330 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:15:49.404547 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:15:49.406456 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:15:49.416661 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:15:49.430176 env[1319]: time="2025-08-13T00:15:49.430114660Z" level=info msg="shim disconnected" id=907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa Aug 13 00:15:49.430176 env[1319]: time="2025-08-13T00:15:49.430175781Z" level=warning msg="cleaning up after shim disconnected" id=907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa namespace=k8s.io Aug 13 00:15:49.430176 env[1319]: time="2025-08-13T00:15:49.430186181Z" level=info msg="cleaning up dead shim" Aug 13 00:15:49.438898 env[1319]: time="2025-08-13T00:15:49.438850210Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:15:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2616 runtime=io.containerd.runc.v2\n" Aug 13 00:15:49.558341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372-rootfs.mount: Deactivated successfully. 
Aug 13 00:15:49.816758 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:58828.service. Aug 13 00:15:49.866821 sshd[2629]: Accepted publickey for core from 10.0.0.1 port 58828 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:15:49.868395 sshd[2629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:49.873000 systemd-logind[1303]: New session 8 of user core. Aug 13 00:15:49.873466 systemd[1]: Started session-8.scope. Aug 13 00:15:50.005234 sshd[2629]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:50.007790 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:58828.service: Deactivated successfully. Aug 13 00:15:50.008855 systemd-logind[1303]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:15:50.008857 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:15:50.010107 systemd-logind[1303]: Removed session 8. Aug 13 00:15:50.093098 env[1319]: time="2025-08-13T00:15:50.092949726Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:50.094856 env[1319]: time="2025-08-13T00:15:50.094810349Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:50.096939 env[1319]: time="2025-08-13T00:15:50.096899854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:15:50.097463 env[1319]: time="2025-08-13T00:15:50.097428941Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 00:15:50.100635 env[1319]: time="2025-08-13T00:15:50.100037212Z" level=info msg="CreateContainer within sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:15:50.111712 env[1319]: time="2025-08-13T00:15:50.111041427Z" level=info msg="CreateContainer within sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\"" Aug 13 00:15:50.115224 env[1319]: time="2025-08-13T00:15:50.113462816Z" level=info msg="StartContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\"" Aug 13 00:15:50.208001 env[1319]: time="2025-08-13T00:15:50.207945689Z" level=info msg="StartContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" returns successfully" Aug 13 00:15:50.317230 kubelet[2063]: E0813 00:15:50.317192 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:50.321342 kubelet[2063]: E0813 00:15:50.321251 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:50.323661 env[1319]: time="2025-08-13T00:15:50.323607180Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:15:50.352913 env[1319]: time="2025-08-13T00:15:50.352786376Z" level=info 
msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\"" Aug 13 00:15:50.353995 env[1319]: time="2025-08-13T00:15:50.353952830Z" level=info msg="StartContainer for \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\"" Aug 13 00:15:50.410939 kubelet[2063]: I0813 00:15:50.410862 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mnxmx" podStartSLOduration=1.2286951130000001 podStartE2EDuration="31.410843164s" podCreationTimestamp="2025-08-13 00:15:19 +0000 UTC" firstStartedPulling="2025-08-13 00:15:19.91611458 +0000 UTC m=+7.774408612" lastFinishedPulling="2025-08-13 00:15:50.098262631 +0000 UTC m=+37.956556663" observedRunningTime="2025-08-13 00:15:50.350787672 +0000 UTC m=+38.209081704" watchObservedRunningTime="2025-08-13 00:15:50.410843164 +0000 UTC m=+38.269137196" Aug 13 00:15:50.481863 env[1319]: time="2025-08-13T00:15:50.481684429Z" level=info msg="StartContainer for \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\" returns successfully" Aug 13 00:15:50.520896 env[1319]: time="2025-08-13T00:15:50.520840426Z" level=info msg="shim disconnected" id=d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415 Aug 13 00:15:50.520896 env[1319]: time="2025-08-13T00:15:50.520884747Z" level=warning msg="cleaning up after shim disconnected" id=d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415 namespace=k8s.io Aug 13 00:15:50.520896 env[1319]: time="2025-08-13T00:15:50.520894307Z" level=info msg="cleaning up dead shim" Aug 13 00:15:50.528781 env[1319]: time="2025-08-13T00:15:50.528720602Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:15:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2729 runtime=io.containerd.runc.v2\n" Aug 13 
00:15:51.325182 kubelet[2063]: E0813 00:15:51.325133 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:51.325182 kubelet[2063]: E0813 00:15:51.325186 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:51.330950 env[1319]: time="2025-08-13T00:15:51.326830783Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:15:51.347870 env[1319]: time="2025-08-13T00:15:51.347739310Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\"" Aug 13 00:15:51.349857 env[1319]: time="2025-08-13T00:15:51.349820855Z" level=info msg="StartContainer for \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\"" Aug 13 00:15:51.405968 env[1319]: time="2025-08-13T00:15:51.405916879Z" level=info msg="StartContainer for \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\" returns successfully" Aug 13 00:15:51.427624 env[1319]: time="2025-08-13T00:15:51.427561016Z" level=info msg="shim disconnected" id=8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff Aug 13 00:15:51.427624 env[1319]: time="2025-08-13T00:15:51.427625576Z" level=warning msg="cleaning up after shim disconnected" id=8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff namespace=k8s.io Aug 13 00:15:51.427956 env[1319]: time="2025-08-13T00:15:51.427637457Z" level=info msg="cleaning up dead shim" Aug 13 00:15:51.437214 env[1319]: 
time="2025-08-13T00:15:51.437162089Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:15:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2782 runtime=io.containerd.runc.v2\n" Aug 13 00:15:51.558131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff-rootfs.mount: Deactivated successfully. Aug 13 00:15:52.338250 kubelet[2063]: E0813 00:15:52.335424 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:52.341996 env[1319]: time="2025-08-13T00:15:52.341953889Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:15:52.361119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092467628.mount: Deactivated successfully. Aug 13 00:15:52.368784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72538858.mount: Deactivated successfully. 
Aug 13 00:15:52.372603 env[1319]: time="2025-08-13T00:15:52.372550401Z" level=info msg="CreateContainer within sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\"" Aug 13 00:15:52.373598 env[1319]: time="2025-08-13T00:15:52.373542612Z" level=info msg="StartContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\"" Aug 13 00:15:52.483915 env[1319]: time="2025-08-13T00:15:52.482517986Z" level=info msg="StartContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" returns successfully" Aug 13 00:15:52.646408 kubelet[2063]: I0813 00:15:52.646291 2063 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:15:52.814905 kubelet[2063]: I0813 00:15:52.814845 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4-config-volume\") pod \"coredns-7c65d6cfc9-xrq8p\" (UID: \"6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4\") " pod="kube-system/coredns-7c65d6cfc9-xrq8p" Aug 13 00:15:52.814905 kubelet[2063]: I0813 00:15:52.814899 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4faa2f2-998d-42e8-8ca4-83caa17dafe5-config-volume\") pod \"coredns-7c65d6cfc9-rmb2g\" (UID: \"b4faa2f2-998d-42e8-8ca4-83caa17dafe5\") " pod="kube-system/coredns-7c65d6cfc9-rmb2g" Aug 13 00:15:52.815354 kubelet[2063]: I0813 00:15:52.814921 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszcr\" (UniqueName: \"kubernetes.io/projected/b4faa2f2-998d-42e8-8ca4-83caa17dafe5-kube-api-access-vszcr\") pod \"coredns-7c65d6cfc9-rmb2g\" (UID: 
\"b4faa2f2-998d-42e8-8ca4-83caa17dafe5\") " pod="kube-system/coredns-7c65d6cfc9-rmb2g" Aug 13 00:15:52.815354 kubelet[2063]: I0813 00:15:52.814944 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2lr\" (UniqueName: \"kubernetes.io/projected/6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4-kube-api-access-8z2lr\") pod \"coredns-7c65d6cfc9-xrq8p\" (UID: \"6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4\") " pod="kube-system/coredns-7c65d6cfc9-xrq8p" Aug 13 00:15:52.920781 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Aug 13 00:15:52.999692 kubelet[2063]: E0813 00:15:52.999646 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:53.000695 env[1319]: time="2025-08-13T00:15:53.000640146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xrq8p,Uid:6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.008478 kubelet[2063]: E0813 00:15:53.008442 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:53.009034 env[1319]: time="2025-08-13T00:15:53.008999000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmb2g,Uid:b4faa2f2-998d-42e8-8ca4-83caa17dafe5,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.232792 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 00:15:53.339149 kubelet[2063]: E0813 00:15:53.339049 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:53.354895 kubelet[2063]: I0813 00:15:53.354660 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbwn4" podStartSLOduration=5.600694542 podStartE2EDuration="34.354640788s" podCreationTimestamp="2025-08-13 00:15:19 +0000 UTC" firstStartedPulling="2025-08-13 00:15:19.776340696 +0000 UTC m=+7.634634728" lastFinishedPulling="2025-08-13 00:15:48.530286942 +0000 UTC m=+36.388580974" observedRunningTime="2025-08-13 00:15:53.35397374 +0000 UTC m=+41.212267732" watchObservedRunningTime="2025-08-13 00:15:53.354640788 +0000 UTC m=+41.212934820" Aug 13 00:15:54.351587 kubelet[2063]: E0813 00:15:54.341210 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:54.856916 systemd-networkd[1097]: cilium_host: Link UP Aug 13 00:15:54.858389 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:15:54.858447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:15:54.857128 systemd-networkd[1097]: cilium_net: Link UP Aug 13 00:15:54.857283 systemd-networkd[1097]: cilium_net: Gained carrier Aug 13 00:15:54.857512 systemd-networkd[1097]: cilium_host: Gained carrier Aug 13 00:15:54.953517 systemd-networkd[1097]: cilium_vxlan: Link UP Aug 13 00:15:54.953524 systemd-networkd[1097]: cilium_vxlan: Gained carrier Aug 13 00:15:55.008584 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:33906.service. 
Aug 13 00:15:55.060041 sshd[3065]: Accepted publickey for core from 10.0.0.1 port 33906 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:15:55.061142 sshd[3065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:55.066261 systemd[1]: Started session-9.scope. Aug 13 00:15:55.066676 systemd-logind[1303]: New session 9 of user core. Aug 13 00:15:55.093866 systemd-networkd[1097]: cilium_net: Gained IPv6LL Aug 13 00:15:55.196668 sshd[3065]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:55.199729 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:33906.service: Deactivated successfully. Aug 13 00:15:55.200803 systemd-logind[1303]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:15:55.200864 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:15:55.201707 systemd-logind[1303]: Removed session 9. Aug 13 00:15:55.279784 kernel: NET: Registered PF_ALG protocol family Aug 13 00:15:55.343421 kubelet[2063]: E0813 00:15:55.343339 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:55.671883 systemd-networkd[1097]: cilium_host: Gained IPv6LL Aug 13 00:15:55.886898 systemd-networkd[1097]: lxc_health: Link UP Aug 13 00:15:55.906799 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:15:55.906320 systemd-networkd[1097]: lxc_health: Gained carrier Aug 13 00:15:56.127779 systemd-networkd[1097]: lxc88b356e7350b: Link UP Aug 13 00:15:56.138770 kernel: eth0: renamed from tmp9c993 Aug 13 00:15:56.147899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc88b356e7350b: link becomes ready Aug 13 00:15:56.148463 systemd-networkd[1097]: lxc88b356e7350b: Gained carrier Aug 13 00:15:56.152990 systemd-networkd[1097]: lxcc5c48092469b: Link UP Aug 13 00:15:56.165800 kernel: eth0: renamed from tmpea94b Aug 13 00:15:56.172834 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxcc5c48092469b: link becomes ready Aug 13 00:15:56.173116 systemd-networkd[1097]: lxcc5c48092469b: Gained carrier Aug 13 00:15:56.375950 systemd-networkd[1097]: cilium_vxlan: Gained IPv6LL Aug 13 00:15:57.463916 systemd-networkd[1097]: lxcc5c48092469b: Gained IPv6LL Aug 13 00:15:57.655921 systemd-networkd[1097]: lxc_health: Gained IPv6LL Aug 13 00:15:57.686123 kubelet[2063]: E0813 00:15:57.686022 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:57.975978 systemd-networkd[1097]: lxc88b356e7350b: Gained IPv6LL Aug 13 00:15:58.347124 kubelet[2063]: E0813 00:15:58.347014 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:59.349041 kubelet[2063]: E0813 00:15:59.349004 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:15:59.956036 env[1319]: time="2025-08-13T00:15:59.955974944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:59.956463 env[1319]: time="2025-08-13T00:15:59.956017225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:59.956463 env[1319]: time="2025-08-13T00:15:59.956028225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:59.956463 env[1319]: time="2025-08-13T00:15:59.956156106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c9934c06dee469ec70c7f20f3f82fab5617e88dc161fc9dad80b064e654b291 pid=3378 runtime=io.containerd.runc.v2 Aug 13 00:15:59.960454 env[1319]: time="2025-08-13T00:15:59.960127905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:59.960454 env[1319]: time="2025-08-13T00:15:59.960329587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:59.960454 env[1319]: time="2025-08-13T00:15:59.960341667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:59.960873 env[1319]: time="2025-08-13T00:15:59.960663190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea94bfa6bb6c53174396d3d733ad77a838fc420d68779e008e404b9b76178437 pid=3394 runtime=io.containerd.runc.v2 Aug 13 00:16:00.029121 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:16:00.036584 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:16:00.052545 env[1319]: time="2025-08-13T00:16:00.052505988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xrq8p,Uid:6a05aef7-e99e-4c7d-9b61-e0d1ff46a3a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea94bfa6bb6c53174396d3d733ad77a838fc420d68779e008e404b9b76178437\"" Aug 13 00:16:00.053612 kubelet[2063]: E0813 00:16:00.053570 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:00.059386 env[1319]: time="2025-08-13T00:16:00.059339612Z" level=info msg="CreateContainer within sandbox \"ea94bfa6bb6c53174396d3d733ad77a838fc420d68779e008e404b9b76178437\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:00.062856 env[1319]: time="2025-08-13T00:16:00.062816525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rmb2g,Uid:b4faa2f2-998d-42e8-8ca4-83caa17dafe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9934c06dee469ec70c7f20f3f82fab5617e88dc161fc9dad80b064e654b291\"" Aug 13 00:16:00.063731 kubelet[2063]: E0813 00:16:00.063704 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:00.065809 env[1319]: time="2025-08-13T00:16:00.065773593Z" level=info msg="CreateContainer within sandbox \"9c9934c06dee469ec70c7f20f3f82fab5617e88dc161fc9dad80b064e654b291\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:00.080391 env[1319]: time="2025-08-13T00:16:00.080320251Z" level=info msg="CreateContainer within sandbox \"ea94bfa6bb6c53174396d3d733ad77a838fc420d68779e008e404b9b76178437\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7047b5d8b1a2730a0d891b6854a06aabc1395620c55dcc1310edfe236a2a1a88\"" Aug 13 00:16:00.081131 env[1319]: time="2025-08-13T00:16:00.081060538Z" level=info msg="StartContainer for \"7047b5d8b1a2730a0d891b6854a06aabc1395620c55dcc1310edfe236a2a1a88\"" Aug 13 00:16:00.087896 env[1319]: time="2025-08-13T00:16:00.087838122Z" level=info msg="CreateContainer within sandbox \"9c9934c06dee469ec70c7f20f3f82fab5617e88dc161fc9dad80b064e654b291\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6325ad3a0ecd40b2c4d93a94d2d99f66cd7465c8b0ad244662787c695327e550\"" Aug 13 00:16:00.088372 
env[1319]: time="2025-08-13T00:16:00.088343967Z" level=info msg="StartContainer for \"6325ad3a0ecd40b2c4d93a94d2d99f66cd7465c8b0ad244662787c695327e550\"" Aug 13 00:16:00.166066 env[1319]: time="2025-08-13T00:16:00.166019982Z" level=info msg="StartContainer for \"6325ad3a0ecd40b2c4d93a94d2d99f66cd7465c8b0ad244662787c695327e550\" returns successfully" Aug 13 00:16:00.168128 env[1319]: time="2025-08-13T00:16:00.166861670Z" level=info msg="StartContainer for \"7047b5d8b1a2730a0d891b6854a06aabc1395620c55dcc1310edfe236a2a1a88\" returns successfully" Aug 13 00:16:00.200539 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:33910.service. Aug 13 00:16:00.256013 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 33910 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:00.259588 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:00.266679 systemd-logind[1303]: New session 10 of user core. Aug 13 00:16:00.267631 systemd[1]: Started session-10.scope. 
Aug 13 00:16:00.358579 kubelet[2063]: E0813 00:16:00.358496 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:00.370970 kubelet[2063]: E0813 00:16:00.364949 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:00.414605 kubelet[2063]: I0813 00:16:00.414515 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xrq8p" podStartSLOduration=41.414497734 podStartE2EDuration="41.414497734s" podCreationTimestamp="2025-08-13 00:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:00.39077955 +0000 UTC m=+48.249073622" watchObservedRunningTime="2025-08-13 00:16:00.414497734 +0000 UTC m=+48.272791766" Aug 13 00:16:00.414800 kubelet[2063]: I0813 00:16:00.414636 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rmb2g" podStartSLOduration=41.414623735 podStartE2EDuration="41.414623735s" podCreationTimestamp="2025-08-13 00:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:00.412440195 +0000 UTC m=+48.270734267" watchObservedRunningTime="2025-08-13 00:16:00.414623735 +0000 UTC m=+48.272917767" Aug 13 00:16:00.451993 sshd[3520]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:00.455083 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:33910.service: Deactivated successfully. Aug 13 00:16:00.456173 systemd-logind[1303]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:16:00.456207 systemd[1]: session-10.scope: Deactivated successfully. 
Aug 13 00:16:00.456984 systemd-logind[1303]: Removed session 10. Aug 13 00:16:00.964291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1044389542.mount: Deactivated successfully. Aug 13 00:16:01.372678 kubelet[2063]: E0813 00:16:01.372577 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:01.373160 kubelet[2063]: E0813 00:16:01.373011 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:02.373736 kubelet[2063]: E0813 00:16:02.373701 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:02.374109 kubelet[2063]: E0813 00:16:02.374015 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:05.455116 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:42414.service. Aug 13 00:16:05.499723 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 42414 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:05.501366 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:05.505718 systemd[1]: Started session-11.scope. Aug 13 00:16:05.506055 systemd-logind[1303]: New session 11 of user core. Aug 13 00:16:05.620999 sshd[3557]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:05.624011 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:42414.service: Deactivated successfully. Aug 13 00:16:05.625032 systemd-logind[1303]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:16:05.625096 systemd[1]: session-11.scope: Deactivated successfully. 
Aug 13 00:16:05.625837 systemd-logind[1303]: Removed session 11. Aug 13 00:16:10.625101 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:42422.service. Aug 13 00:16:10.670850 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:10.672098 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:10.679045 systemd-logind[1303]: New session 12 of user core. Aug 13 00:16:10.679222 systemd[1]: Started session-12.scope. Aug 13 00:16:10.804503 sshd[3572]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:10.807022 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:42422.service: Deactivated successfully. Aug 13 00:16:10.808120 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:16:10.808231 systemd-logind[1303]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:16:10.809622 systemd-logind[1303]: Removed session 12. Aug 13 00:16:15.807488 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:43758.service. Aug 13 00:16:15.862566 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 43758 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:15.863965 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:15.869958 systemd[1]: Started session-13.scope. Aug 13 00:16:15.870431 systemd-logind[1303]: New session 13 of user core. Aug 13 00:16:15.987170 sshd[3589]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:15.989578 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:43758.service: Deactivated successfully. Aug 13 00:16:15.990442 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:16:15.992894 systemd-logind[1303]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:16:15.994429 systemd-logind[1303]: Removed session 13. Aug 13 00:16:20.997690 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:43764.service. 
Aug 13 00:16:21.042034 sshd[3607]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:21.043359 sshd[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:21.047473 systemd-logind[1303]: New session 14 of user core. Aug 13 00:16:21.048693 systemd[1]: Started session-14.scope. Aug 13 00:16:21.173010 sshd[3607]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:21.175904 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:43780.service. Aug 13 00:16:21.177375 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:43764.service: Deactivated successfully. Aug 13 00:16:21.178851 systemd-logind[1303]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:16:21.178918 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:16:21.182953 systemd-logind[1303]: Removed session 14. Aug 13 00:16:21.225428 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:21.227315 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:21.231287 systemd-logind[1303]: New session 15 of user core. Aug 13 00:16:21.232151 systemd[1]: Started session-15.scope. Aug 13 00:16:21.403316 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:43790.service. Aug 13 00:16:21.404372 sshd[3620]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:21.409212 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:43780.service: Deactivated successfully. Aug 13 00:16:21.410256 systemd-logind[1303]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:16:21.410392 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:16:21.417672 systemd-logind[1303]: Removed session 15. 
Aug 13 00:16:21.455317 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 43790 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:21.456829 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:21.460761 systemd-logind[1303]: New session 16 of user core. Aug 13 00:16:21.461615 systemd[1]: Started session-16.scope. Aug 13 00:16:21.588078 sshd[3633]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:21.591181 systemd-logind[1303]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:16:21.591497 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:43790.service: Deactivated successfully. Aug 13 00:16:21.592441 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:16:21.593503 systemd-logind[1303]: Removed session 16. Aug 13 00:16:26.591464 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:33784.service. Aug 13 00:16:26.634993 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 33784 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:26.636758 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:26.641585 systemd-logind[1303]: New session 17 of user core. Aug 13 00:16:26.642055 systemd[1]: Started session-17.scope. Aug 13 00:16:26.759351 sshd[3650]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:26.761813 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:33784.service: Deactivated successfully. Aug 13 00:16:26.762864 systemd-logind[1303]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:16:26.762927 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:16:26.763969 systemd-logind[1303]: Removed session 17. Aug 13 00:16:31.762590 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:33800.service. 
Aug 13 00:16:31.805283 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 33800 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:31.807030 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:31.811357 systemd-logind[1303]: New session 18 of user core. Aug 13 00:16:31.811931 systemd[1]: Started session-18.scope. Aug 13 00:16:31.930164 sshd[3664]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:31.933066 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:33802.service. Aug 13 00:16:31.934108 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:33800.service: Deactivated successfully. Aug 13 00:16:31.935175 systemd-logind[1303]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:16:31.935238 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:16:31.936251 systemd-logind[1303]: Removed session 18. Aug 13 00:16:31.976133 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 33802 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:31.977841 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:31.981634 systemd-logind[1303]: New session 19 of user core. Aug 13 00:16:31.982517 systemd[1]: Started session-19.scope. Aug 13 00:16:32.179468 sshd[3676]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:32.182083 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:33814.service. Aug 13 00:16:32.183971 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:33802.service: Deactivated successfully. Aug 13 00:16:32.185026 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:16:32.185092 systemd-logind[1303]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:16:32.186050 systemd-logind[1303]: Removed session 19. 
Aug 13 00:16:32.229224 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 33814 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:32.230724 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:32.235766 systemd-logind[1303]: New session 20 of user core. Aug 13 00:16:32.236179 systemd[1]: Started session-20.scope. Aug 13 00:16:33.391914 sshd[3688]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:33.392934 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:40794.service. Aug 13 00:16:33.400215 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:33814.service: Deactivated successfully. Aug 13 00:16:33.401942 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:16:33.401996 systemd-logind[1303]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:16:33.403233 systemd-logind[1303]: Removed session 20. Aug 13 00:16:33.445049 sshd[3708]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:33.446461 sshd[3708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:33.450943 systemd-logind[1303]: New session 21 of user core. Aug 13 00:16:33.451777 systemd[1]: Started session-21.scope. Aug 13 00:16:33.717494 sshd[3708]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:33.720860 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:40800.service. Aug 13 00:16:33.722549 systemd-logind[1303]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:16:33.722778 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:40794.service: Deactivated successfully. Aug 13 00:16:33.723629 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:16:33.724075 systemd-logind[1303]: Removed session 21. 
Aug 13 00:16:33.767684 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 40800 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:33.769114 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:33.773575 systemd[1]: Started session-22.scope. Aug 13 00:16:33.774135 systemd-logind[1303]: New session 22 of user core. Aug 13 00:16:33.892351 sshd[3722]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:33.895146 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:40800.service: Deactivated successfully. Aug 13 00:16:33.895931 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:16:33.895976 systemd-logind[1303]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:16:33.896950 systemd-logind[1303]: Removed session 22. Aug 13 00:16:34.223332 kubelet[2063]: E0813 00:16:34.223295 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:16:38.896297 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:40802.service. Aug 13 00:16:38.940447 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 40802 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:38.941855 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:38.945896 systemd-logind[1303]: New session 23 of user core. Aug 13 00:16:38.946811 systemd[1]: Started session-23.scope. Aug 13 00:16:39.057921 sshd[3741]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:39.060817 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:40802.service: Deactivated successfully. Aug 13 00:16:39.061916 systemd-logind[1303]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:16:39.061970 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:16:39.062808 systemd-logind[1303]: Removed session 23. 
Aug 13 00:16:44.068065 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:40158.service.
Aug 13 00:16:44.111922 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:16:44.113485 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:16:44.118260 systemd-logind[1303]: New session 24 of user core.
Aug 13 00:16:44.118511 systemd[1]: Started session-24.scope.
Aug 13 00:16:44.221515 kubelet[2063]: E0813 00:16:44.221472 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:44.235363 sshd[3755]: pam_unix(sshd:session): session closed for user core
Aug 13 00:16:44.238855 systemd-logind[1303]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:16:44.239086 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:40158.service: Deactivated successfully.
Aug 13 00:16:44.240100 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:16:44.240561 systemd-logind[1303]: Removed session 24.
Aug 13 00:16:49.239226 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:40174.service.
Aug 13 00:16:49.284479 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:16:49.288617 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:16:49.293845 systemd-logind[1303]: New session 25 of user core.
Aug 13 00:16:49.297945 systemd[1]: Started session-25.scope.
Aug 13 00:16:49.438114 sshd[3769]: pam_unix(sshd:session): session closed for user core
Aug 13 00:16:49.441086 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:40174.service: Deactivated successfully.
Aug 13 00:16:49.441966 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:16:49.444854 systemd-logind[1303]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:16:49.446383 systemd-logind[1303]: Removed session 25.
Aug 13 00:16:50.221278 kubelet[2063]: E0813 00:16:50.221236 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:50.222695 kubelet[2063]: E0813 00:16:50.222648 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:16:54.438672 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:44902.service.
Aug 13 00:16:54.482580 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 44902 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:16:54.484911 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:16:54.490187 systemd-logind[1303]: New session 26 of user core.
Aug 13 00:16:54.491986 systemd[1]: Started session-26.scope.
Aug 13 00:16:54.600551 sshd[3786]: pam_unix(sshd:session): session closed for user core
Aug 13 00:16:54.603344 systemd[1]: Started sshd@26-10.0.0.125:22-10.0.0.1:44918.service.
Aug 13 00:16:54.604054 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:44902.service: Deactivated successfully.
Aug 13 00:16:54.605430 systemd-logind[1303]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:16:54.605475 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:16:54.606705 systemd-logind[1303]: Removed session 26.
Aug 13 00:16:54.648760 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 44918 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:16:54.650416 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:16:54.654597 systemd-logind[1303]: New session 27 of user core.
Aug 13 00:16:54.655161 systemd[1]: Started session-27.scope.
Aug 13 00:16:56.904133 env[1319]: time="2025-08-13T00:16:56.904063082Z" level=info msg="StopContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" with timeout 30 (s)"
Aug 13 00:16:56.904549 env[1319]: time="2025-08-13T00:16:56.904482046Z" level=info msg="Stop container \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" with signal terminated"
Aug 13 00:16:56.955995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6-rootfs.mount: Deactivated successfully.
Aug 13 00:16:56.972994 env[1319]: time="2025-08-13T00:16:56.972938760Z" level=info msg="shim disconnected" id=8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6
Aug 13 00:16:56.972994 env[1319]: time="2025-08-13T00:16:56.972994000Z" level=warning msg="cleaning up after shim disconnected" id=8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6 namespace=k8s.io
Aug 13 00:16:56.973266 env[1319]: time="2025-08-13T00:16:56.973007240Z" level=info msg="cleaning up dead shim"
Aug 13 00:16:56.975268 env[1319]: time="2025-08-13T00:16:56.975209740Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:16:56.982014 env[1319]: time="2025-08-13T00:16:56.981956838Z" level=info msg="StopContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" with timeout 2 (s)"
Aug 13 00:16:56.982437 env[1319]: time="2025-08-13T00:16:56.982407122Z" level=info msg="Stop container \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" with signal terminated"
Aug 13 00:16:56.983489 env[1319]: time="2025-08-13T00:16:56.983452971Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:16:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\n"
Aug 13 00:16:56.985819 env[1319]: time="2025-08-13T00:16:56.985775951Z" level=info msg="StopContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" returns successfully"
Aug 13 00:16:56.986495 env[1319]: time="2025-08-13T00:16:56.986463957Z" level=info msg="StopPodSandbox for \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\""
Aug 13 00:16:56.986681 env[1319]: time="2025-08-13T00:16:56.986656799Z" level=info msg="Container to stop \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:56.992009 systemd-networkd[1097]: lxc_health: Link DOWN
Aug 13 00:16:56.992020 systemd-networkd[1097]: lxc_health: Lost carrier
Aug 13 00:16:56.993174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def-shm.mount: Deactivated successfully.
Aug 13 00:16:57.025367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def-rootfs.mount: Deactivated successfully.
Aug 13 00:16:57.034340 env[1319]: time="2025-08-13T00:16:57.034267970Z" level=info msg="shim disconnected" id=038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def
Aug 13 00:16:57.034340 env[1319]: time="2025-08-13T00:16:57.034332931Z" level=warning msg="cleaning up after shim disconnected" id=038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def namespace=k8s.io
Aug 13 00:16:57.034340 env[1319]: time="2025-08-13T00:16:57.034345691Z" level=info msg="cleaning up dead shim"
Aug 13 00:16:57.045039 env[1319]: time="2025-08-13T00:16:57.044986583Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:16:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n"
Aug 13 00:16:57.045375 env[1319]: time="2025-08-13T00:16:57.045348346Z" level=info msg="TearDown network for sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" successfully"
Aug 13 00:16:57.045418 env[1319]: time="2025-08-13T00:16:57.045378026Z" level=info msg="StopPodSandbox for \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" returns successfully"
Aug 13 00:16:57.054310 env[1319]: time="2025-08-13T00:16:57.054263423Z" level=info msg="shim disconnected" id=481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4
Aug 13 00:16:57.054721 env[1319]: time="2025-08-13T00:16:57.054693827Z" level=warning msg="cleaning up after shim disconnected" id=481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4 namespace=k8s.io
Aug 13 00:16:57.055357 env[1319]: time="2025-08-13T00:16:57.055332712Z" level=info msg="cleaning up dead shim"
Aug 13 00:16:57.064915 env[1319]: time="2025-08-13T00:16:57.064868634Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:16:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3917 runtime=io.containerd.runc.v2\n"
Aug 13 00:16:57.066987 env[1319]: time="2025-08-13T00:16:57.066935092Z" level=info msg="StopContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" returns successfully"
Aug 13 00:16:57.067775 env[1319]: time="2025-08-13T00:16:57.067698099Z" level=info msg="StopPodSandbox for \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\""
Aug 13 00:16:57.067882 env[1319]: time="2025-08-13T00:16:57.067796020Z" level=info msg="Container to stop \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:57.067882 env[1319]: time="2025-08-13T00:16:57.067818300Z" level=info msg="Container to stop \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:57.067882 env[1319]: time="2025-08-13T00:16:57.067831180Z" level=info msg="Container to stop \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:57.067882 env[1319]: time="2025-08-13T00:16:57.067848380Z" level=info msg="Container to stop \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:57.067882 env[1319]: time="2025-08-13T00:16:57.067861260Z" level=info msg="Container to stop \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:16:57.089883 env[1319]: time="2025-08-13T00:16:57.089825490Z" level=info msg="shim disconnected" id=a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb
Aug 13 00:16:57.089883 env[1319]: time="2025-08-13T00:16:57.089876290Z" level=warning msg="cleaning up after shim disconnected" id=a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb namespace=k8s.io
Aug 13 00:16:57.089883 env[1319]: time="2025-08-13T00:16:57.089886770Z" level=info msg="cleaning up dead shim"
Aug 13 00:16:57.098227 env[1319]: time="2025-08-13T00:16:57.098173162Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:16:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3950 runtime=io.containerd.runc.v2\n"
Aug 13 00:16:57.098544 env[1319]: time="2025-08-13T00:16:57.098503725Z" level=info msg="TearDown network for sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" successfully"
Aug 13 00:16:57.098544 env[1319]: time="2025-08-13T00:16:57.098532925Z" level=info msg="StopPodSandbox for \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" returns successfully"
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204774 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-config-path\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204842 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-kernel\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204866 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cwtw\" (UniqueName: \"kubernetes.io/projected/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-kube-api-access-4cwtw\") pod \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\" (UID: \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\") "
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204883 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-net\") pod 
\"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204899 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hostproc\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.205791 kubelet[2063]: I0813 00:16:57.204915 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-xtables-lock\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.204932 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-cilium-config-path\") pod \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\" (UID: \"5bdeba23-abd0-4d90-a0b4-4e2d0d30934d\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.204946 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cni-path\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.204967 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-bpf-maps\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.204985 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhvk\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-kube-api-access-xzhvk\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.205001 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-etc-cni-netd\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206282 kubelet[2063]: I0813 00:16:57.205016 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-run\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206418 kubelet[2063]: I0813 00:16:57.205033 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-lib-modules\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206418 kubelet[2063]: I0813 00:16:57.205049 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hubble-tls\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206418 kubelet[2063]: I0813 00:16:57.205064 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-cgroup\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.206418 kubelet[2063]: I0813 00:16:57.205082 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-clustermesh-secrets\") pod \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\" (UID: \"d24ebe88-bea5-4d65-a553-f6f4fe520ad9\") "
Aug 13 00:16:57.207620 kubelet[2063]: I0813 00:16:57.207571 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.207808 kubelet[2063]: I0813 00:16:57.207790 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.208173 kubelet[2063]: I0813 00:16:57.208147 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cni-path" (OuterVolumeSpecName: "cni-path") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.208332 kubelet[2063]: I0813 00:16:57.208315 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.208984 kubelet[2063]: I0813 00:16:57.208942 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:16:57.209074 kubelet[2063]: I0813 00:16:57.209006 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hostproc" (OuterVolumeSpecName: "hostproc") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.209074 kubelet[2063]: I0813 00:16:57.209025 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.209074 kubelet[2063]: I0813 00:16:57.209045 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.209910 kubelet[2063]: I0813 00:16:57.209858 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:16:57.209994 kubelet[2063]: I0813 00:16:57.209933 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.209994 kubelet[2063]: I0813 00:16:57.209955 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.211156 kubelet[2063]: I0813 00:16:57.211122 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:16:57.211605 kubelet[2063]: I0813 00:16:57.211551 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-kube-api-access-4cwtw" (OuterVolumeSpecName: "kube-api-access-4cwtw") pod "5bdeba23-abd0-4d90-a0b4-4e2d0d30934d" (UID: "5bdeba23-abd0-4d90-a0b4-4e2d0d30934d"). InnerVolumeSpecName "kube-api-access-4cwtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:16:57.211885 kubelet[2063]: I0813 00:16:57.211795 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-kube-api-access-xzhvk" (OuterVolumeSpecName: "kube-api-access-xzhvk") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "kube-api-access-xzhvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:16:57.211979 kubelet[2063]: I0813 00:16:57.211832 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d24ebe88-bea5-4d65-a553-f6f4fe520ad9" (UID: "d24ebe88-bea5-4d65-a553-f6f4fe520ad9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:16:57.212042 kubelet[2063]: I0813 00:16:57.211947 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5bdeba23-abd0-4d90-a0b4-4e2d0d30934d" (UID: "5bdeba23-abd0-4d90-a0b4-4e2d0d30934d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:16:57.299814 kubelet[2063]: E0813 00:16:57.299776 2063 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:16:57.306928 kubelet[2063]: I0813 00:16:57.306890 2063 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.306928 kubelet[2063]: I0813 00:16:57.306922 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.306928 kubelet[2063]: I0813 00:16:57.306930 2063 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306938 2063 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306947 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306958 2063 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306967 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306975 2063 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306983 2063 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cwtw\" (UniqueName: \"kubernetes.io/projected/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-kube-api-access-4cwtw\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306990 2063 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307084 kubelet[2063]: I0813 00:16:57.306998 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307262 kubelet[2063]: I0813 00:16:57.307005 2063 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307262 kubelet[2063]: I0813 00:16:57.307013 2063 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307262 kubelet[2063]: I0813 00:16:57.307020 2063 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307262 kubelet[2063]: I0813 00:16:57.307028 2063 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhvk\" (UniqueName: \"kubernetes.io/projected/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-kube-api-access-xzhvk\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.307262 kubelet[2063]: I0813 00:16:57.307037 2063 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d24ebe88-bea5-4d65-a553-f6f4fe520ad9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:16:57.489995 kubelet[2063]: I0813 00:16:57.489885 2063 scope.go:117] "RemoveContainer" containerID="8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6"
Aug 13 00:16:57.493281 env[1319]: time="2025-08-13T00:16:57.493209291Z" level=info msg="RemoveContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\""
Aug 13 00:16:57.497927 env[1319]: time="2025-08-13T00:16:57.497836171Z" level=info msg="RemoveContainer for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" returns successfully"
Aug 13 00:16:57.498141 kubelet[2063]: I0813 00:16:57.498111 2063 scope.go:117] "RemoveContainer" containerID="8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6"
Aug 13 00:16:57.498416 env[1319]: time="2025-08-13T00:16:57.498341495Z" level=error msg="ContainerStatus for \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\": not found"
Aug 13 00:16:57.498609 kubelet[2063]: E0813 00:16:57.498577 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\": not found" containerID="8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6"
Aug 13 00:16:57.498694 kubelet[2063]: I0813 00:16:57.498618 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6"} err="failed to get container status \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c247cfc720d5eb898de9aa1a669f2c5e229e384d5ea436e5647ebb8062bb3f6\": not found"
Aug 13 00:16:57.498757 kubelet[2063]: I0813 00:16:57.498696 2063 scope.go:117] "RemoveContainer" containerID="481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4"
Aug 13 00:16:57.500168 env[1319]: time="2025-08-13T00:16:57.500133790Z" level=info msg="RemoveContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\""
Aug 13 00:16:57.502980 env[1319]: time="2025-08-13T00:16:57.502943415Z" level=info msg="RemoveContainer for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" returns successfully"
Aug 13 00:16:57.503232 kubelet[2063]: I0813 00:16:57.503203 2063 scope.go:117] "RemoveContainer" containerID="8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff"
Aug 13 00:16:57.504377 env[1319]: time="2025-08-13T00:16:57.504347027Z" level=info msg="RemoveContainer for \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\""
Aug 13 00:16:57.508193 env[1319]: time="2025-08-13T00:16:57.508136500Z" level=info msg="RemoveContainer for \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\" returns successfully"
Aug 13 00:16:57.508360 kubelet[2063]: I0813 00:16:57.508336 2063 scope.go:117] "RemoveContainer" containerID="d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415"
Aug 13 00:16:57.510022 env[1319]: time="2025-08-13T00:16:57.509976675Z" level=info msg="RemoveContainer for \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\""
Aug 13 00:16:57.512566 env[1319]: time="2025-08-13T00:16:57.512531657Z" level=info msg="RemoveContainer for \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\" returns successfully"
Aug 13 00:16:57.512943 kubelet[2063]: I0813 00:16:57.512921 2063 scope.go:117] "RemoveContainer" containerID="907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa"
Aug 13 00:16:57.514338 env[1319]: time="2025-08-13T00:16:57.514305753Z" level=info msg="RemoveContainer for \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\""
Aug 13 00:16:57.517930 env[1319]: time="2025-08-13T00:16:57.517871224Z" level=info msg="RemoveContainer for \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\" returns successfully"
Aug 13 00:16:57.518104 kubelet[2063]: I0813 00:16:57.518075 2063 scope.go:117] "RemoveContainer" containerID="892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372"
Aug 13 00:16:57.520690 env[1319]: time="2025-08-13T00:16:57.520574247Z" level=info msg="RemoveContainer for \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\""
Aug 13 00:16:57.525407 env[1319]: time="2025-08-13T00:16:57.525354288Z" level=info msg="RemoveContainer for \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\" returns successfully"
Aug 13 00:16:57.526083 kubelet[2063]: I0813 00:16:57.526048 2063 scope.go:117] "RemoveContainer" containerID="481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4"
Aug 13 00:16:57.529203 env[1319]: time="2025-08-13T00:16:57.529119361Z" level=error msg="ContainerStatus for \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\": not found"
Aug 13 00:16:57.529691 kubelet[2063]: E0813 00:16:57.529553 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\": not found" containerID="481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4"
Aug 13 00:16:57.529800 kubelet[2063]: I0813 00:16:57.529704 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4"} err="failed to get container status \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4\": not found"
Aug 13 00:16:57.529800 kubelet[2063]: I0813 00:16:57.529736 2063 scope.go:117] "RemoveContainer" containerID="8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff"
Aug 13 00:16:57.530073 env[1319]: time="2025-08-13T00:16:57.530003248Z" level=error msg="ContainerStatus for \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\": not found"
Aug 13 00:16:57.530194 kubelet[2063]: E0813 00:16:57.530166 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\": not found" containerID="8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff"
Aug 13 00:16:57.530238 kubelet[2063]: I0813 00:16:57.530195 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff"} err="failed to get container status \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cf9a379dc388e679c569f0e40e87ff595cc09958e856da20eac7b40825150ff\": not found"
Aug 13 00:16:57.530238 kubelet[2063]: I0813 00:16:57.530227 2063 scope.go:117] "RemoveContainer" containerID="d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415"
Aug 13 00:16:57.530448 env[1319]: time="2025-08-13T00:16:57.530391892Z" level=error msg="ContainerStatus for \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\": not found"
Aug 13 00:16:57.530564 kubelet[2063]: E0813 00:16:57.530531 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\": not found" containerID="d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415"
Aug 13 00:16:57.530999 kubelet[2063]: I0813 00:16:57.530957 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415"} err="failed to get container status \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0e53f95ef4db7666491f41615107925bd8763aa1db707b33f7e13ba737d8415\": not found"
Aug 13 00:16:57.530999 kubelet[2063]: I0813 00:16:57.530991 2063 scope.go:117] "RemoveContainer" containerID="907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa"
Aug 13 00:16:57.533054 env[1319]: time="2025-08-13T00:16:57.532979274Z" level=error msg="ContainerStatus for 
\"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\": not found" Aug 13 00:16:57.533268 kubelet[2063]: E0813 00:16:57.533222 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\": not found" containerID="907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa" Aug 13 00:16:57.533338 kubelet[2063]: I0813 00:16:57.533269 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa"} err="failed to get container status \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"907e661b7e1734cc8c13f414b41541c54a29f5652c30b1de5a202c0cc0d7d1aa\": not found" Aug 13 00:16:57.533338 kubelet[2063]: I0813 00:16:57.533294 2063 scope.go:117] "RemoveContainer" containerID="892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372" Aug 13 00:16:57.533834 env[1319]: time="2025-08-13T00:16:57.533778801Z" level=error msg="ContainerStatus for \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\": not found" Aug 13 00:16:57.533999 kubelet[2063]: E0813 00:16:57.533966 2063 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\": not found" 
containerID="892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372" Aug 13 00:16:57.534046 kubelet[2063]: I0813 00:16:57.534003 2063 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372"} err="failed to get container status \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\": rpc error: code = NotFound desc = an error occurred when try to find container \"892bae4fd226b09af1a849781a7ab91d59c8b666adb65a93a9bf26e8187de372\": not found" Aug 13 00:16:57.921861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-481d7e346989621a295f96114608336b48bf4bfe53a46fb3c2e7be0c1bf44cb4-rootfs.mount: Deactivated successfully. Aug 13 00:16:57.922012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb-rootfs.mount: Deactivated successfully. Aug 13 00:16:57.922097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb-shm.mount: Deactivated successfully. Aug 13 00:16:57.922192 systemd[1]: var-lib-kubelet-pods-5bdeba23\x2dabd0\x2d4d90\x2da0b4\x2d4e2d0d30934d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4cwtw.mount: Deactivated successfully. Aug 13 00:16:57.922274 systemd[1]: var-lib-kubelet-pods-d24ebe88\x2dbea5\x2d4d65\x2da553\x2df6f4fe520ad9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzhvk.mount: Deactivated successfully. Aug 13 00:16:57.922373 systemd[1]: var-lib-kubelet-pods-d24ebe88\x2dbea5\x2d4d65\x2da553\x2df6f4fe520ad9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:16:57.922455 systemd[1]: var-lib-kubelet-pods-d24ebe88\x2dbea5\x2d4d65\x2da553\x2df6f4fe520ad9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:16:58.222959 kubelet[2063]: I0813 00:16:58.222862 2063 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bdeba23-abd0-4d90-a0b4-4e2d0d30934d" path="/var/lib/kubelet/pods/5bdeba23-abd0-4d90-a0b4-4e2d0d30934d/volumes" Aug 13 00:16:58.223766 kubelet[2063]: I0813 00:16:58.223716 2063 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" path="/var/lib/kubelet/pods/d24ebe88-bea5-4d65-a553-f6f4fe520ad9/volumes" Aug 13 00:16:58.857468 sshd[3798]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:58.861003 systemd[1]: Started sshd@27-10.0.0.125:22-10.0.0.1:44920.service. Aug 13 00:16:58.861681 systemd[1]: sshd@26-10.0.0.125:22-10.0.0.1:44918.service: Deactivated successfully. Aug 13 00:16:58.863229 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:16:58.863331 systemd-logind[1303]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:16:58.864386 systemd-logind[1303]: Removed session 27. Aug 13 00:16:58.910355 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 44920 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:16:58.913139 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:16:58.918136 systemd-logind[1303]: New session 28 of user core. Aug 13 00:16:58.918865 systemd[1]: Started session-28.scope. Aug 13 00:17:00.087659 sshd[3968]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:00.093048 systemd[1]: Started sshd@28-10.0.0.125:22-10.0.0.1:44934.service. Aug 13 00:17:00.096594 systemd[1]: sshd@27-10.0.0.125:22-10.0.0.1:44920.service: Deactivated successfully. Aug 13 00:17:00.097618 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:17:00.100262 systemd-logind[1303]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:17:00.101116 systemd-logind[1303]: Removed session 28. 
Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129694 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="apply-sysctl-overwrites" Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129731 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5bdeba23-abd0-4d90-a0b4-4e2d0d30934d" containerName="cilium-operator" Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129740 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="mount-bpf-fs" Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129763 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="cilium-agent" Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129769 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="mount-cgroup" Aug 13 00:17:00.130000 kubelet[2063]: E0813 00:17:00.129777 2063 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="clean-cilium-state" Aug 13 00:17:00.130000 kubelet[2063]: I0813 00:17:00.129801 2063 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdeba23-abd0-4d90-a0b4-4e2d0d30934d" containerName="cilium-operator" Aug 13 00:17:00.130000 kubelet[2063]: I0813 00:17:00.129809 2063 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24ebe88-bea5-4d65-a553-f6f4fe520ad9" containerName="cilium-agent" Aug 13 00:17:00.139057 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:17:00.143372 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:17:00.154052 systemd[1]: Started session-29.scope. 
Aug 13 00:17:00.154267 systemd-logind[1303]: New session 29 of user core. Aug 13 00:17:00.230243 kubelet[2063]: I0813 00:17:00.230198 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cni-path\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230243 kubelet[2063]: I0813 00:17:00.230247 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-clustermesh-secrets\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230429 kubelet[2063]: I0813 00:17:00.230274 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-cgroup\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230429 kubelet[2063]: I0813 00:17:00.230293 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-config-path\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230429 kubelet[2063]: I0813 00:17:00.230337 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-kernel\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230429 kubelet[2063]: 
I0813 00:17:00.230358 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-bpf-maps\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230429 kubelet[2063]: I0813 00:17:00.230374 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hostproc\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230563 kubelet[2063]: I0813 00:17:00.230442 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-etc-cni-netd\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230563 kubelet[2063]: I0813 00:17:00.230480 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-ipsec-secrets\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230563 kubelet[2063]: I0813 00:17:00.230515 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hubble-tls\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230563 kubelet[2063]: I0813 00:17:00.230539 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-run\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230563 kubelet[2063]: I0813 00:17:00.230562 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-xtables-lock\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230730 kubelet[2063]: I0813 00:17:00.230580 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-net\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230730 kubelet[2063]: I0813 00:17:00.230604 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtq4w\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-kube-api-access-wtq4w\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.230730 kubelet[2063]: I0813 00:17:00.230626 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-lib-modules\") pod \"cilium-dgqlp\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " pod="kube-system/cilium-dgqlp" Aug 13 00:17:00.287042 sshd[3980]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:00.289929 systemd[1]: Started sshd@29-10.0.0.125:22-10.0.0.1:44946.service. Aug 13 00:17:00.292735 systemd[1]: sshd@28-10.0.0.125:22-10.0.0.1:44934.service: Deactivated successfully. 
Aug 13 00:17:00.295016 systemd-logind[1303]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:17:00.295098 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:17:00.295988 systemd-logind[1303]: Removed session 29. Aug 13 00:17:00.300318 kubelet[2063]: E0813 00:17:00.300003 2063 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-wtq4w lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-dgqlp" podUID="5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" Aug 13 00:17:00.339827 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 44946 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:17:00.341668 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:17:00.351666 systemd-logind[1303]: New session 30 of user core. Aug 13 00:17:00.352705 systemd[1]: Started session-30.scope. Aug 13 00:17:00.633200 kubelet[2063]: I0813 00:17:00.633085 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-cgroup\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633377 kubelet[2063]: I0813 00:17:00.633221 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.633432 kubelet[2063]: I0813 00:17:00.633357 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-clustermesh-secrets\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633432 kubelet[2063]: I0813 00:17:00.633418 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hubble-tls\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633487 kubelet[2063]: I0813 00:17:00.633438 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cni-path\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633487 kubelet[2063]: I0813 00:17:00.633466 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-config-path\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633487 kubelet[2063]: I0813 00:17:00.633485 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtq4w\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-kube-api-access-wtq4w\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633502 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-bpf-maps\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633517 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-run\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633532 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-xtables-lock\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633546 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-net\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633562 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-kernel\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.633577 kubelet[2063]: I0813 00:17:00.633575 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hostproc\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633588 2063 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-etc-cni-netd\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633611 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-lib-modules\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633632 2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-ipsec-secrets\") pod \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\" (UID: \"5ce164bc-f3f3-4311-8f86-bea0d6f02ec9\") " Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633668 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633936 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.634363 kubelet[2063]: I0813 00:17:00.633965 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.635722 kubelet[2063]: I0813 00:17:00.635675 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:17:00.642537 kubelet[2063]: I0813 00:17:00.635857 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642537 kubelet[2063]: I0813 00:17:00.635897 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642537 kubelet[2063]: I0813 00:17:00.635916 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642537 kubelet[2063]: I0813 00:17:00.635929 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642537 kubelet[2063]: I0813 00:17:00.635943 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.637939 systemd[1]: var-lib-kubelet-pods-5ce164bc\x2df3f3\x2d4311\x2d8f86\x2dbea0d6f02ec9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:17:00.642736 kubelet[2063]: I0813 00:17:00.635960 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642736 kubelet[2063]: I0813 00:17:00.635973 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:17:00.642736 kubelet[2063]: I0813 00:17:00.636445 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:17:00.642736 kubelet[2063]: I0813 00:17:00.636533 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:17:00.642736 kubelet[2063]: I0813 00:17:00.639171 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:17:00.638087 systemd[1]: var-lib-kubelet-pods-5ce164bc\x2df3f3\x2d4311\x2d8f86\x2dbea0d6f02ec9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:17:00.642918 kubelet[2063]: I0813 00:17:00.639268 2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-kube-api-access-wtq4w" (OuterVolumeSpecName: "kube-api-access-wtq4w") pod "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" (UID: "5ce164bc-f3f3-4311-8f86-bea0d6f02ec9"). InnerVolumeSpecName "kube-api-access-wtq4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:17:00.641040 systemd[1]: var-lib-kubelet-pods-5ce164bc\x2df3f3\x2d4311\x2d8f86\x2dbea0d6f02ec9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwtq4w.mount: Deactivated successfully.
Aug 13 00:17:00.641265 systemd[1]: var-lib-kubelet-pods-5ce164bc\x2df3f3\x2d4311\x2d8f86\x2dbea0d6f02ec9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:17:00.734833 kubelet[2063]: I0813 00:17:00.734780 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.734833 kubelet[2063]: I0813 00:17:00.734813 2063 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.734833 kubelet[2063]: I0813 00:17:00.734839 2063 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.734833 kubelet[2063]: I0813 00:17:00.734849 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734860 2063 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtq4w\" (UniqueName: \"kubernetes.io/projected/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-kube-api-access-wtq4w\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734869 2063 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cni-path\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734877 2063 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-cilium-run\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734885 2063 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734895 2063 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734903 2063 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734911 2063 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735067 kubelet[2063]: I0813 00:17:00.734919 2063 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-hostproc\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735252 kubelet[2063]: I0813 00:17:00.734928 2063 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:00.735252 kubelet[2063]: I0813 00:17:00.734936 2063 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9-lib-modules\") on node \"localhost\" DevicePath \"\""
Aug 13 00:17:01.641386 kubelet[2063]: I0813 00:17:01.641338 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-cilium-cgroup\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641386 kubelet[2063]: I0813 00:17:01.641383 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-xtables-lock\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641805 kubelet[2063]: I0813 00:17:01.641405 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gzm\" (UniqueName: \"kubernetes.io/projected/8f6cf923-7855-4d0f-b868-0d77370dcb70-kube-api-access-88gzm\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641805 kubelet[2063]: I0813 00:17:01.641429 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-hostproc\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641805 kubelet[2063]: I0813 00:17:01.641454 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f6cf923-7855-4d0f-b868-0d77370dcb70-cilium-ipsec-secrets\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641805 kubelet[2063]: I0813 00:17:01.641472 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-host-proc-sys-kernel\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641805 kubelet[2063]: I0813 00:17:01.641492 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-lib-modules\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641511 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-host-proc-sys-net\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641527 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-bpf-maps\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641547 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-cilium-run\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641561 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-cni-path\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641576 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f6cf923-7855-4d0f-b868-0d77370dcb70-etc-cni-netd\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.641943 kubelet[2063]: I0813 00:17:01.641591 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f6cf923-7855-4d0f-b868-0d77370dcb70-clustermesh-secrets\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.642068 kubelet[2063]: I0813 00:17:01.641629 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f6cf923-7855-4d0f-b868-0d77370dcb70-cilium-config-path\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.642068 kubelet[2063]: I0813 00:17:01.641657 2063 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f6cf923-7855-4d0f-b868-0d77370dcb70-hubble-tls\") pod \"cilium-8xcgr\" (UID: \"8f6cf923-7855-4d0f-b868-0d77370dcb70\") " pod="kube-system/cilium-8xcgr"
Aug 13 00:17:01.844830 kubelet[2063]: E0813 00:17:01.844737 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:01.846089 env[1319]: time="2025-08-13T00:17:01.846047651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xcgr,Uid:8f6cf923-7855-4d0f-b868-0d77370dcb70,Namespace:kube-system,Attempt:0,}"
Aug 13 00:17:01.857461 env[1319]: time="2025-08-13T00:17:01.857201625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:17:01.857566 env[1319]: time="2025-08-13T00:17:01.857476868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:17:01.857566 env[1319]: time="2025-08-13T00:17:01.857504268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:17:01.857809 env[1319]: time="2025-08-13T00:17:01.857756550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759 pid=4028 runtime=io.containerd.runc.v2
Aug 13 00:17:01.907386 env[1319]: time="2025-08-13T00:17:01.907340089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8xcgr,Uid:8f6cf923-7855-4d0f-b868-0d77370dcb70,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\""
Aug 13 00:17:01.908378 kubelet[2063]: E0813 00:17:01.908156 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:01.911660 env[1319]: time="2025-08-13T00:17:01.910913399Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:17:01.923573 env[1319]: time="2025-08-13T00:17:01.923500625Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b2dce194c6ddc392719f83f2d85e8b8ca21da31abb42a470322924d393941c4e\""
Aug 13 00:17:01.925568 env[1319]: time="2025-08-13T00:17:01.925519642Z" level=info msg="StartContainer for \"b2dce194c6ddc392719f83f2d85e8b8ca21da31abb42a470322924d393941c4e\""
Aug 13 00:17:01.993372 env[1319]: time="2025-08-13T00:17:01.993312095Z" level=info msg="StartContainer for \"b2dce194c6ddc392719f83f2d85e8b8ca21da31abb42a470322924d393941c4e\" returns successfully"
Aug 13 00:17:02.026583 env[1319]: time="2025-08-13T00:17:02.026535814Z" level=info msg="shim disconnected" id=b2dce194c6ddc392719f83f2d85e8b8ca21da31abb42a470322924d393941c4e
Aug 13 00:17:02.026900 env[1319]: time="2025-08-13T00:17:02.026877457Z" level=warning msg="cleaning up after shim disconnected" id=b2dce194c6ddc392719f83f2d85e8b8ca21da31abb42a470322924d393941c4e namespace=k8s.io
Aug 13 00:17:02.026973 env[1319]: time="2025-08-13T00:17:02.026958018Z" level=info msg="cleaning up dead shim"
Aug 13 00:17:02.034480 env[1319]: time="2025-08-13T00:17:02.034433881Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4113 runtime=io.containerd.runc.v2\n"
Aug 13 00:17:02.222814 kubelet[2063]: I0813 00:17:02.222688 2063 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce164bc-f3f3-4311-8f86-bea0d6f02ec9" path="/var/lib/kubelet/pods/5ce164bc-f3f3-4311-8f86-bea0d6f02ec9/volumes"
Aug 13 00:17:02.301432 kubelet[2063]: E0813 00:17:02.301380 2063 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:17:02.508341 kubelet[2063]: E0813 00:17:02.508229 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:02.515378 env[1319]: time="2025-08-13T00:17:02.515334601Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:17:02.533673 env[1319]: time="2025-08-13T00:17:02.533615594Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b82724efaba0c556b69fc2986346dcf693c8e82bc9290b20d37b1c958fdef3be\""
Aug 13 00:17:02.535435 env[1319]: time="2025-08-13T00:17:02.534138519Z" level=info msg="StartContainer for \"b82724efaba0c556b69fc2986346dcf693c8e82bc9290b20d37b1c958fdef3be\""
Aug 13 00:17:02.587333 env[1319]: time="2025-08-13T00:17:02.587277125Z" level=info msg="StartContainer for \"b82724efaba0c556b69fc2986346dcf693c8e82bc9290b20d37b1c958fdef3be\" returns successfully"
Aug 13 00:17:02.608559 env[1319]: time="2025-08-13T00:17:02.608510144Z" level=info msg="shim disconnected" id=b82724efaba0c556b69fc2986346dcf693c8e82bc9290b20d37b1c958fdef3be
Aug 13 00:17:02.608559 env[1319]: time="2025-08-13T00:17:02.608559584Z" level=warning msg="cleaning up after shim disconnected" id=b82724efaba0c556b69fc2986346dcf693c8e82bc9290b20d37b1c958fdef3be namespace=k8s.io
Aug 13 00:17:02.608796 env[1319]: time="2025-08-13T00:17:02.608570264Z" level=info msg="cleaning up dead shim"
Aug 13 00:17:02.622352 env[1319]: time="2025-08-13T00:17:02.622308300Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4175 runtime=io.containerd.runc.v2\n"
Aug 13 00:17:03.512407 kubelet[2063]: E0813 00:17:03.512377 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:03.515041 env[1319]: time="2025-08-13T00:17:03.514982058Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:17:03.559404 env[1319]: time="2025-08-13T00:17:03.559349069Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51\""
Aug 13 00:17:03.561112 env[1319]: time="2025-08-13T00:17:03.561073643Z" level=info msg="StartContainer for \"a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51\""
Aug 13 00:17:03.627526 env[1319]: time="2025-08-13T00:17:03.627478479Z" level=info msg="StartContainer for \"a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51\" returns successfully"
Aug 13 00:17:03.651702 env[1319]: time="2025-08-13T00:17:03.651598600Z" level=info msg="shim disconnected" id=a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51
Aug 13 00:17:03.651702 env[1319]: time="2025-08-13T00:17:03.651654681Z" level=warning msg="cleaning up after shim disconnected" id=a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51 namespace=k8s.io
Aug 13 00:17:03.651702 env[1319]: time="2025-08-13T00:17:03.651666401Z" level=info msg="cleaning up dead shim"
Aug 13 00:17:03.668893 env[1319]: time="2025-08-13T00:17:03.668833464Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4232 runtime=io.containerd.runc.v2\n"
Aug 13 00:17:03.747493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a19413bc7730b800f3eb61dbd30ebf58e7ac2a52216b011ae9c153fccb4e3c51-rootfs.mount: Deactivated successfully.
Aug 13 00:17:04.527067 kubelet[2063]: E0813 00:17:04.527025 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:04.543478 env[1319]: time="2025-08-13T00:17:04.543427794Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:17:04.555483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457145842.mount: Deactivated successfully.
Aug 13 00:17:04.560021 env[1319]: time="2025-08-13T00:17:04.559956492Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d\""
Aug 13 00:17:04.561740 env[1319]: time="2025-08-13T00:17:04.561704507Z" level=info msg="StartContainer for \"471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d\""
Aug 13 00:17:04.608724 env[1319]: time="2025-08-13T00:17:04.608666097Z" level=info msg="StartContainer for \"471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d\" returns successfully"
Aug 13 00:17:04.625842 env[1319]: time="2025-08-13T00:17:04.625789440Z" level=info msg="shim disconnected" id=471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d
Aug 13 00:17:04.625842 env[1319]: time="2025-08-13T00:17:04.625839000Z" level=warning msg="cleaning up after shim disconnected" id=471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d namespace=k8s.io
Aug 13 00:17:04.625842 env[1319]: time="2025-08-13T00:17:04.625848640Z" level=info msg="cleaning up dead shim"
Aug 13 00:17:04.633150 env[1319]: time="2025-08-13T00:17:04.633096301Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4286 runtime=io.containerd.runc.v2\n"
Aug 13 00:17:04.747575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471168dc5096328718e883f50a3b3f0f4e5884e50cd79998cfadf33011dfb74d-rootfs.mount: Deactivated successfully.
Aug 13 00:17:04.792816 kubelet[2063]: I0813 00:17:04.792686 2063 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:17:04Z","lastTransitionTime":"2025-08-13T00:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:17:05.531429 kubelet[2063]: E0813 00:17:05.531379 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:05.534487 env[1319]: time="2025-08-13T00:17:05.534298658Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:17:05.553420 env[1319]: time="2025-08-13T00:17:05.553370376Z" level=info msg="CreateContainer within sandbox \"67e1c38f0f99126bfe14049fb4d6a45d1b61f13ad0780bac3fb853b8eef30759\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5d64de3ae1222cfa1cc446e1b6372c57563b5f471c3facd6bf83acdc5d7e312\""
Aug 13 00:17:05.554378 env[1319]: time="2025-08-13T00:17:05.554335624Z" level=info msg="StartContainer for \"b5d64de3ae1222cfa1cc446e1b6372c57563b5f471c3facd6bf83acdc5d7e312\""
Aug 13 00:17:05.612921 env[1319]: time="2025-08-13T00:17:05.612872309Z" level=info msg="StartContainer for \"b5d64de3ae1222cfa1cc446e1b6372c57563b5f471c3facd6bf83acdc5d7e312\" returns successfully"
Aug 13 00:17:05.873815 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Aug 13 00:17:06.535701 kubelet[2063]: E0813 00:17:06.535663 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:06.552501 kubelet[2063]: I0813 00:17:06.552394 2063 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8xcgr" podStartSLOduration=5.552376948 podStartE2EDuration="5.552376948s" podCreationTimestamp="2025-08-13 00:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:17:06.552028785 +0000 UTC m=+114.410322817" watchObservedRunningTime="2025-08-13 00:17:06.552376948 +0000 UTC m=+114.410670980"
Aug 13 00:17:07.846181 kubelet[2063]: E0813 00:17:07.846147 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:08.858766 systemd-networkd[1097]: lxc_health: Link UP
Aug 13 00:17:08.865959 systemd-networkd[1097]: lxc_health: Gained carrier
Aug 13 00:17:08.866764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:17:09.846349 kubelet[2063]: E0813 00:17:09.846312 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:10.167942 systemd-networkd[1097]: lxc_health: Gained IPv6LL
Aug 13 00:17:10.542365 kubelet[2063]: E0813 00:17:10.542257 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:11.543638 kubelet[2063]: E0813 00:17:11.543584 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:12.234807 env[1319]: time="2025-08-13T00:17:12.234765061Z" level=info msg="StopPodSandbox for \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\""
Aug 13 00:17:12.235176 env[1319]: time="2025-08-13T00:17:12.234867302Z" level=info msg="TearDown network for sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" successfully"
Aug 13 00:17:12.235176 env[1319]: time="2025-08-13T00:17:12.234901822Z" level=info msg="StopPodSandbox for \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" returns successfully"
Aug 13 00:17:12.235306 env[1319]: time="2025-08-13T00:17:12.235272545Z" level=info msg="RemovePodSandbox for \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\""
Aug 13 00:17:12.235351 env[1319]: time="2025-08-13T00:17:12.235311145Z" level=info msg="Forcibly stopping sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\""
Aug 13 00:17:12.235401 env[1319]: time="2025-08-13T00:17:12.235384386Z" level=info msg="TearDown network for sandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" successfully"
Aug 13 00:17:12.249063 env[1319]: time="2025-08-13T00:17:12.249007975Z" level=info msg="RemovePodSandbox \"038682fd4318367ffec6bb18c15915e317b2b2c79a5c0b77c422bb6fe75d4def\" returns successfully"
Aug 13 00:17:12.249586 env[1319]: time="2025-08-13T00:17:12.249551620Z" level=info msg="StopPodSandbox for \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\""
Aug 13 00:17:12.249692 env[1319]: time="2025-08-13T00:17:12.249644940Z" level=info msg="TearDown network for sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" successfully"
Aug 13 00:17:12.249734 env[1319]: time="2025-08-13T00:17:12.249692341Z" level=info msg="StopPodSandbox for \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" returns successfully"
Aug 13 00:17:12.250039 env[1319]: time="2025-08-13T00:17:12.250014023Z" level=info msg="RemovePodSandbox for \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\""
Aug 13 00:17:12.250079 env[1319]: time="2025-08-13T00:17:12.250046944Z" level=info msg="Forcibly stopping sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\""
Aug 13 00:17:12.250132 env[1319]: time="2025-08-13T00:17:12.250116024Z" level=info msg="TearDown network for sandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" successfully"
Aug 13 00:17:12.252752 env[1319]: time="2025-08-13T00:17:12.252709205Z" level=info msg="RemovePodSandbox \"a647006a951cedf7794e782d93252c6531f30f8346ae4a75499e742dea1931eb\" returns successfully"
Aug 13 00:17:14.221317 kubelet[2063]: E0813 00:17:14.221279 2063 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:17:15.188653 sshd[3996]: pam_unix(sshd:session): session closed for user core
Aug 13 00:17:15.190962 systemd[1]: sshd@29-10.0.0.125:22-10.0.0.1:44946.service: Deactivated successfully.
Aug 13 00:17:15.192070 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:17:15.192071 systemd-logind[1303]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:17:15.192967 systemd-logind[1303]: Removed session 30.