Aug 13 00:03:27.745996 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:03:27.746016 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 13 00:03:27.746024 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:03:27.746029 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Aug 13 00:03:27.746034 kernel: random: crng init done
Aug 13 00:03:27.746039 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:03:27.746046 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Aug 13 00:03:27.746052 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:03:27.746058 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746063 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746069 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746075 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746080 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746086 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746093 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746099 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746105 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:03:27.746111 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:03:27.746117 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:03:27.746122 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:03:27.746128 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Aug 13 00:03:27.746134 kernel: Zone ranges:
Aug 13 00:03:27.746139 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:03:27.746146 kernel: DMA32 empty
Aug 13 00:03:27.746152 kernel: Normal empty
Aug 13 00:03:27.746157 kernel: Movable zone start for each node
Aug 13 00:03:27.746163 kernel: Early memory node ranges
Aug 13 00:03:27.746168 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Aug 13 00:03:27.746174 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Aug 13 00:03:27.746180 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Aug 13 00:03:27.746185 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Aug 13 00:03:27.746191 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Aug 13 00:03:27.746197 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Aug 13 00:03:27.746207 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Aug 13 00:03:27.746217 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:03:27.746224 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:03:27.746230 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:03:27.746236 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:03:27.746241 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:03:27.746247 kernel: psci: Trusted OS migration not required
Aug 13 00:03:27.746255 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:03:27.746261 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:03:27.746268 kernel: ACPI: SRAT not present
Aug 13 00:03:27.746274 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 13 00:03:27.746280 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 13 00:03:27.746286 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:03:27.746292 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:03:27.746298 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:03:27.746304 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:03:27.746310 kernel: CPU features: detected: Spectre-v4
Aug 13 00:03:27.746316 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:03:27.746323 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:03:27.746329 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:03:27.746335 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:03:27.746341 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:03:27.746347 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:03:27.746353 kernel: Policy zone: DMA
Aug 13 00:03:27.746360 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:03:27.746366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:03:27.746372 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:03:27.746379 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:03:27.746385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:03:27.746392 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Aug 13 00:03:27.746398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:03:27.746404 kernel: trace event string verifier disabled
Aug 13 00:03:27.746410 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:03:27.746417 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:03:27.746423 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:03:27.746429 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:03:27.746444 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:03:27.746450 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:03:27.746456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:03:27.746462 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:03:27.746470 kernel: GICv3: 256 SPIs implemented
Aug 13 00:03:27.746476 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:03:27.746481 kernel: GICv3: Distributor has no Range Selector support
Aug 13 00:03:27.746487 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:03:27.746493 kernel: GICv3: 16 PPIs implemented
Aug 13 00:03:27.746499 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:03:27.746505 kernel: ACPI: SRAT not present
Aug 13 00:03:27.746511 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:03:27.746517 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:03:27.746523 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:03:27.746529 kernel: GICv3: using LPI property table @0x00000000400d0000
Aug 13 00:03:27.746535 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Aug 13 00:03:27.746542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:03:27.746548 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:03:27.746554 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:03:27.746572 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:03:27.746578 kernel: arm-pv: using stolen time PV
Aug 13 00:03:27.746585 kernel: Console: colour dummy device 80x25
Aug 13 00:03:27.746591 kernel: ACPI: Core revision 20210730
Aug 13 00:03:27.746597 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:03:27.746604 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:03:27.746610 kernel: LSM: Security Framework initializing
Aug 13 00:03:27.746625 kernel: SELinux: Initializing.
Aug 13 00:03:27.746631 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:03:27.746639 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:03:27.746646 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:03:27.746652 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:03:27.746658 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:03:27.746665 kernel: Remapping and enabling EFI services.
Aug 13 00:03:27.746671 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:03:27.746677 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:03:27.746685 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:03:27.746699 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Aug 13 00:03:27.746705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:03:27.746712 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:03:27.746718 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:03:27.746725 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:03:27.746731 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Aug 13 00:03:27.746738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:03:27.746744 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:03:27.746750 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:03:27.746758 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:03:27.746764 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Aug 13 00:03:27.746770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:03:27.746776 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:03:27.746787 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:03:27.746794 kernel: SMP: Total of 4 processors activated.
Aug 13 00:03:27.746801 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:03:27.746808 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:03:27.746823 kernel: CPU features: detected: Common not Private translations
Aug 13 00:03:27.746829 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:03:27.746836 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:03:27.746842 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:03:27.746860 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:03:27.746866 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:03:27.746873 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:03:27.746886 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:03:27.746892 kernel: alternatives: patching kernel code
Aug 13 00:03:27.746907 kernel: devtmpfs: initialized
Aug 13 00:03:27.746914 kernel: KASLR enabled
Aug 13 00:03:27.746921 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:03:27.746928 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:03:27.746934 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:03:27.746941 kernel: SMBIOS 3.0.0 present.
Aug 13 00:03:27.746947 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Aug 13 00:03:27.746954 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:03:27.746960 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:03:27.746968 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:03:27.746975 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:03:27.746982 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:03:27.746988 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Aug 13 00:03:27.746995 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:03:27.747001 kernel: cpuidle: using governor menu
Aug 13 00:03:27.747008 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:03:27.747014 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:03:27.747021 kernel: ACPI: bus type PCI registered
Aug 13 00:03:27.747028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:03:27.747035 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:03:27.747059 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:03:27.747066 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:03:27.747072 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:03:27.747079 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:03:27.747085 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:03:27.747092 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:03:27.747099 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:03:27.747106 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:03:27.747113 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:03:27.747119 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:03:27.747125 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:03:27.747132 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:03:27.747138 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:03:27.747145 kernel: ACPI: Interpreter enabled
Aug 13 00:03:27.747151 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:03:27.747158 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:03:27.747165 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:03:27.747172 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:03:27.747178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:03:27.755701 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:03:27.755805 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:03:27.755874 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:03:27.755934 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:03:27.756002 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:03:27.756012 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:03:27.756019 kernel: PCI host bridge to bus 0000:00
Aug 13 00:03:27.756088 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:03:27.756143 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:03:27.756195 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:03:27.756246 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:03:27.756320 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:03:27.756390 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:03:27.756454 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:03:27.756516 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:03:27.756592 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:03:27.756655 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:03:27.756726 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:03:27.758659 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:03:27.758762 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:03:27.758821 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:03:27.758875 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:03:27.758885 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:03:27.758892 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:03:27.758899 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:03:27.758913 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:03:27.758920 kernel: iommu: Default domain type: Translated
Aug 13 00:03:27.758927 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:03:27.758933 kernel: vgaarb: loaded
Aug 13 00:03:27.758940 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:03:27.758947 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:03:27.758954 kernel: PTP clock support registered
Aug 13 00:03:27.758961 kernel: Registered efivars operations
Aug 13 00:03:27.758968 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:03:27.758975 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:03:27.758985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:03:27.758991 kernel: pnp: PnP ACPI init
Aug 13 00:03:27.759059 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:03:27.759070 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:03:27.759077 kernel: NET: Registered PF_INET protocol family
Aug 13 00:03:27.759085 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:03:27.759091 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:03:27.759098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:03:27.759107 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:03:27.759114 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:03:27.759120 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:03:27.759127 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:03:27.759134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:03:27.759141 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:03:27.759147 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:03:27.759154 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:03:27.759161 kernel: kvm [1]: HYP mode not available
Aug 13 00:03:27.759169 kernel: Initialise system trusted keyrings
Aug 13 00:03:27.759176 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:03:27.759182 kernel: Key type asymmetric registered
Aug 13 00:03:27.759189 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:03:27.759196 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:03:27.759202 kernel: io scheduler mq-deadline registered
Aug 13 00:03:27.759209 kernel: io scheduler kyber registered
Aug 13 00:03:27.759216 kernel: io scheduler bfq registered
Aug 13 00:03:27.759223 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:03:27.759231 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:03:27.759238 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:03:27.759301 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:03:27.759310 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:03:27.759317 kernel: thunder_xcv, ver 1.0
Aug 13 00:03:27.759324 kernel: thunder_bgx, ver 1.0
Aug 13 00:03:27.759331 kernel: nicpf, ver 1.0
Aug 13 00:03:27.759337 kernel: nicvf, ver 1.0
Aug 13 00:03:27.759408 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:03:27.759485 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:03:27 UTC (1755043407)
Aug 13 00:03:27.759495 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:03:27.759502 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:03:27.759509 kernel: Segment Routing with IPv6
Aug 13 00:03:27.759516 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:03:27.759523 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:03:27.759530 kernel: Key type dns_resolver registered
Aug 13 00:03:27.759537 kernel: registered taskstats version 1
Aug 13 00:03:27.759547 kernel: Loading compiled-in X.509 certificates
Aug 13 00:03:27.759555 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 13 00:03:27.759577 kernel: Key type .fscrypt registered
Aug 13 00:03:27.759585 kernel: Key type fscrypt-provisioning registered
Aug 13 00:03:27.759592 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:03:27.759600 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:03:27.759606 kernel: ima: No architecture policies found
Aug 13 00:03:27.759614 kernel: clk: Disabling unused clocks
Aug 13 00:03:27.759621 kernel: Freeing unused kernel memory: 36416K
Aug 13 00:03:27.759629 kernel: Run /init as init process
Aug 13 00:03:27.759636 kernel: with arguments:
Aug 13 00:03:27.759643 kernel: /init
Aug 13 00:03:27.759650 kernel: with environment:
Aug 13 00:03:27.759656 kernel: HOME=/
Aug 13 00:03:27.759663 kernel: TERM=linux
Aug 13 00:03:27.759669 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:03:27.759679 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:03:27.759696 systemd[1]: Detected virtualization kvm.
Aug 13 00:03:27.759703 systemd[1]: Detected architecture arm64.
Aug 13 00:03:27.759710 systemd[1]: Running in initrd.
Aug 13 00:03:27.759717 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:03:27.759725 systemd[1]: Hostname set to .
Aug 13 00:03:27.759732 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:03:27.759739 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:03:27.759746 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:03:27.759754 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:03:27.759761 systemd[1]: Reached target paths.target.
Aug 13 00:03:27.759768 systemd[1]: Reached target slices.target.
Aug 13 00:03:27.759775 systemd[1]: Reached target swap.target.
Aug 13 00:03:27.759782 systemd[1]: Reached target timers.target.
Aug 13 00:03:27.759790 systemd[1]: Listening on iscsid.socket.
Aug 13 00:03:27.759797 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:03:27.759806 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:03:27.759813 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:03:27.759820 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:03:27.759827 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:03:27.759834 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:03:27.759842 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:03:27.759849 systemd[1]: Reached target sockets.target.
Aug 13 00:03:27.759856 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:03:27.759864 systemd[1]: Finished network-cleanup.service.
Aug 13 00:03:27.759872 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:03:27.759879 systemd[1]: Starting systemd-journald.service...
Aug 13 00:03:27.759886 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:03:27.759894 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:03:27.759901 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:03:27.759908 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:03:27.759915 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:03:27.759922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:03:27.759929 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:03:27.759938 kernel: audit: type=1130 audit(1755043407.748:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.759945 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:03:27.759953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:03:27.759960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:03:27.759971 systemd-journald[290]: Journal started
Aug 13 00:03:27.760017 systemd-journald[290]: Runtime Journal (/run/log/journal/d7b5502ec8af4241af1d3bd100d42d81) is 6.0M, max 48.7M, 42.6M free.
Aug 13 00:03:27.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.732881 systemd-modules-load[291]: Inserted module 'overlay'
Aug 13 00:03:27.757669 systemd-resolved[292]: Positive Trust Anchors:
Aug 13 00:03:27.768775 systemd[1]: Started systemd-journald.service.
Aug 13 00:03:27.768798 kernel: audit: type=1130 audit(1755043407.760:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.768810 kernel: audit: type=1130 audit(1755043407.762:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.768819 kernel: Bridge firewalling registered
Aug 13 00:03:27.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.757678 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:03:27.773308 kernel: audit: type=1130 audit(1755043407.768:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.757712 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:03:27.762125 systemd-resolved[292]: Defaulting to hostname 'linux'.
Aug 13 00:03:27.763995 systemd[1]: Started systemd-resolved.service.
Aug 13 00:03:27.768768 systemd-modules-load[291]: Inserted module 'br_netfilter'
Aug 13 00:03:27.785456 kernel: SCSI subsystem initialized
Aug 13 00:03:27.769641 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:03:27.785647 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:03:27.788130 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:03:27.796390 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:03:27.796415 kernel: audit: type=1130 audit(1755043407.786:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.796425 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:03:27.796434 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:03:27.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.796607 systemd-modules-load[291]: Inserted module 'dm_multipath'
Aug 13 00:03:27.797526 dracut-cmdline[310]: dracut-dracut-053
Aug 13 00:03:27.798586 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:03:27.804323 kernel: audit: type=1130 audit(1755043407.798:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.804410 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:03:27.800202 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:03:27.810181 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:03:27.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.814612 kernel: audit: type=1130 audit(1755043407.810:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.861585 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:03:27.873599 kernel: iscsi: registered transport (tcp)
Aug 13 00:03:27.888597 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:03:27.888617 kernel: QLogic iSCSI HBA Driver
Aug 13 00:03:27.922332 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:03:27.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.924138 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:03:27.927739 kernel: audit: type=1130 audit(1755043407.922:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:27.970598 kernel: raid6: neonx8 gen() 13743 MB/s
Aug 13 00:03:27.987587 kernel: raid6: neonx8 xor() 10757 MB/s
Aug 13 00:03:28.004586 kernel: raid6: neonx4 gen() 13510 MB/s
Aug 13 00:03:28.021581 kernel: raid6: neonx4 xor() 11147 MB/s
Aug 13 00:03:28.038581 kernel: raid6: neonx2 gen() 13038 MB/s
Aug 13 00:03:28.055583 kernel: raid6: neonx2 xor() 10540 MB/s
Aug 13 00:03:28.072588 kernel: raid6: neonx1 gen() 10564 MB/s
Aug 13 00:03:28.089588 kernel: raid6: neonx1 xor() 8783 MB/s
Aug 13 00:03:28.106581 kernel: raid6: int64x8 gen() 6257 MB/s
Aug 13 00:03:28.123586 kernel: raid6: int64x8 xor() 3538 MB/s
Aug 13 00:03:28.140583 kernel: raid6: int64x4 gen() 7214 MB/s
Aug 13 00:03:28.157580 kernel: raid6: int64x4 xor() 3851 MB/s
Aug 13 00:03:28.174582 kernel: raid6: int64x2 gen() 6143 MB/s
Aug 13 00:03:28.191581 kernel: raid6: int64x2 xor() 3314 MB/s
Aug 13 00:03:28.208597 kernel: raid6: int64x1 gen() 5025 MB/s
Aug 13 00:03:28.225761 kernel: raid6: int64x1 xor() 2641 MB/s
Aug 13 00:03:28.225794 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s
Aug 13 00:03:28.225804 kernel: raid6: .... xor() 10757 MB/s, rmw enabled
Aug 13 00:03:28.226948 kernel: raid6: using neon recovery algorithm
Aug 13 00:03:28.237856 kernel: xor: measuring software checksum speed
Aug 13 00:03:28.237881 kernel: 8regs : 17209 MB/sec
Aug 13 00:03:28.238584 kernel: 32regs : 20681 MB/sec
Aug 13 00:03:28.239782 kernel: arm64_neon : 23955 MB/sec
Aug 13 00:03:28.239795 kernel: xor: using function: arm64_neon (23955 MB/sec)
Aug 13 00:03:28.304609 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Aug 13 00:03:28.320266 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:03:28.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:28.321000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:03:28.325638 kernel: audit: type=1130 audit(1755043408.320:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:28.323000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:03:28.325063 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:03:28.342707 systemd-udevd[495]: Using default interface naming scheme 'v252'.
Aug 13 00:03:28.346138 systemd[1]: Started systemd-udevd.service.
Aug 13 00:03:28.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:28.348267 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:03:28.368865 dracut-pre-trigger[502]: rd.md=0: removing MD RAID activation
Aug 13 00:03:28.404431 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:03:28.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:28.406328 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:03:28.451548 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:03:28.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:28.482123 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:03:28.488012 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:03:28.488027 kernel: GPT:9289727 != 19775487
Aug 13 00:03:28.488036 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:03:28.488044 kernel: GPT:9289727 != 19775487
Aug 13 00:03:28.488052 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:03:28.488060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:03:28.503591 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (551)
Aug 13 00:03:28.504111 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 00:03:28.509087 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 00:03:28.510209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 00:03:28.515021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 00:03:28.520467 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 00:03:28.522346 systemd[1]: Starting disk-uuid.service...
Aug 13 00:03:28.528393 disk-uuid[570]: Primary Header is updated.
Aug 13 00:03:28.528393 disk-uuid[570]: Secondary Entries is updated.
Aug 13 00:03:28.528393 disk-uuid[570]: Secondary Header is updated.
Aug 13 00:03:28.532591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:03:29.543687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:03:29.543777 disk-uuid[571]: The operation has completed successfully.
Aug 13 00:03:29.563772 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:03:29.565336 systemd[1]: Finished disk-uuid.service.
Aug 13 00:03:29.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:29.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:03:29.572447 systemd[1]: Starting verity-setup.service...
Aug 13 00:03:29.587583 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:03:29.612974 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:03:29.615029 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:03:29.615931 systemd[1]: Finished verity-setup.service. Aug 13 00:03:29.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.668583 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:03:29.669205 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:03:29.670854 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:03:29.671590 systemd[1]: Starting ignition-setup.service... Aug 13 00:03:29.673629 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:03:29.680689 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:03:29.680730 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:03:29.680741 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:03:29.690190 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:03:29.697904 systemd[1]: Finished ignition-setup.service. Aug 13 00:03:29.699617 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:03:29.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.770256 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:03:29.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:29.771000 audit: BPF prog-id=9 op=LOAD Aug 13 00:03:29.772586 systemd[1]: Starting systemd-networkd.service... Aug 13 00:03:29.797963 ignition[661]: Ignition 2.14.0 Aug 13 00:03:29.797974 ignition[661]: Stage: fetch-offline Aug 13 00:03:29.798017 ignition[661]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:29.798027 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:29.798188 ignition[661]: parsed url from cmdline: "" Aug 13 00:03:29.798191 ignition[661]: no config URL provided Aug 13 00:03:29.798196 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:03:29.798203 ignition[661]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:03:29.798222 ignition[661]: op(1): [started] loading QEMU firmware config module Aug 13 00:03:29.798227 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 00:03:29.806708 ignition[661]: op(1): [finished] loading QEMU firmware config module Aug 13 00:03:29.813284 systemd-networkd[746]: lo: Link UP Aug 13 00:03:29.813298 systemd-networkd[746]: lo: Gained carrier Aug 13 00:03:29.813716 systemd-networkd[746]: Enumeration completed Aug 13 00:03:29.813897 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:03:29.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.815091 systemd-networkd[746]: eth0: Link UP Aug 13 00:03:29.815095 systemd-networkd[746]: eth0: Gained carrier Aug 13 00:03:29.817981 systemd[1]: Started systemd-networkd.service. Aug 13 00:03:29.819106 systemd[1]: Reached target network.target. Aug 13 00:03:29.820738 systemd[1]: Starting iscsiuio.service... 
Aug 13 00:03:29.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.829669 systemd[1]: Started iscsiuio.service. Aug 13 00:03:29.831396 systemd[1]: Starting iscsid.service... Aug 13 00:03:29.835232 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:03:29.835232 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Aug 13 00:03:29.835232 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:03:29.835232 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:03:29.835232 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:03:29.835232 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:03:29.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.838672 systemd-networkd[746]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:03:29.842877 systemd[1]: Started iscsid.service. Aug 13 00:03:29.845248 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:03:29.860313 systemd[1]: Finished dracut-initqueue.service. 
Aug 13 00:03:29.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.861460 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:03:29.863078 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:03:29.864760 systemd[1]: Reached target remote-fs.target. Aug 13 00:03:29.867516 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:03:29.874871 ignition[661]: parsing config with SHA512: 70bd2b18ddc48a9668185a6f790a69fad0226f4458e67bf404355f89fa996612cba65261ffc581e352a8f22091ca41d4bd40d159baeded20ea18fd7dee0886cb Aug 13 00:03:29.877155 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:03:29.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.884662 unknown[661]: fetched base config from "system" Aug 13 00:03:29.885140 ignition[661]: fetch-offline: fetch-offline passed Aug 13 00:03:29.884672 unknown[661]: fetched user config from "qemu" Aug 13 00:03:29.885196 ignition[661]: Ignition finished successfully Aug 13 00:03:29.889374 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:03:29.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.890432 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:03:29.891283 systemd[1]: Starting ignition-kargs.service... 
Aug 13 00:03:29.901002 ignition[767]: Ignition 2.14.0 Aug 13 00:03:29.901013 ignition[767]: Stage: kargs Aug 13 00:03:29.901115 ignition[767]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:29.903356 systemd[1]: Finished ignition-kargs.service. Aug 13 00:03:29.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.901125 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:29.902011 ignition[767]: kargs: kargs passed Aug 13 00:03:29.905931 systemd[1]: Starting ignition-disks.service... Aug 13 00:03:29.902056 ignition[767]: Ignition finished successfully Aug 13 00:03:29.913580 ignition[773]: Ignition 2.14.0 Aug 13 00:03:29.913593 ignition[773]: Stage: disks Aug 13 00:03:29.913713 ignition[773]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:29.913724 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:29.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.915453 systemd[1]: Finished ignition-disks.service. Aug 13 00:03:29.914709 ignition[773]: disks: disks passed Aug 13 00:03:29.916826 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:03:29.914758 ignition[773]: Ignition finished successfully Aug 13 00:03:29.918516 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:03:29.920200 systemd[1]: Reached target local-fs.target. Aug 13 00:03:29.921462 systemd[1]: Reached target sysinit.target. Aug 13 00:03:29.923057 systemd[1]: Reached target basic.target. Aug 13 00:03:29.925670 systemd[1]: Starting systemd-fsck-root.service... 
Aug 13 00:03:29.937706 systemd-fsck[781]: ROOT: clean, 629/553520 files, 56026/553472 blocks Aug 13 00:03:29.941285 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:03:29.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:29.943635 systemd[1]: Mounting sysroot.mount... Aug 13 00:03:29.949595 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:03:29.950028 systemd[1]: Mounted sysroot.mount. Aug 13 00:03:29.950863 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:03:29.953510 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:03:29.954544 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 00:03:29.954601 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:03:29.954627 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:03:29.956589 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:03:29.958745 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:03:29.963666 initrd-setup-root[791]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:03:29.968571 initrd-setup-root[799]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:03:29.973254 initrd-setup-root[807]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:03:29.977627 initrd-setup-root[815]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:03:30.006806 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:03:30.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:30.008664 systemd[1]: Starting ignition-mount.service... Aug 13 00:03:30.010116 systemd[1]: Starting sysroot-boot.service... Aug 13 00:03:30.014600 bash[832]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:03:30.022318 ignition[834]: INFO : Ignition 2.14.0 Aug 13 00:03:30.022318 ignition[834]: INFO : Stage: mount Aug 13 00:03:30.024000 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:30.024000 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:30.024000 ignition[834]: INFO : mount: mount passed Aug 13 00:03:30.024000 ignition[834]: INFO : Ignition finished successfully Aug 13 00:03:30.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:30.025074 systemd[1]: Finished ignition-mount.service. Aug 13 00:03:30.030336 systemd[1]: Finished sysroot-boot.service. Aug 13 00:03:30.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:30.624785 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:03:30.632539 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (842) Aug 13 00:03:30.632587 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:03:30.632597 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:03:30.633250 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:03:30.638011 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:03:30.640789 systemd[1]: Starting ignition-files.service... 
Aug 13 00:03:30.657134 ignition[862]: INFO : Ignition 2.14.0 Aug 13 00:03:30.657134 ignition[862]: INFO : Stage: files Aug 13 00:03:30.658917 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:30.658917 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:30.658917 ignition[862]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:03:30.665789 ignition[862]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:03:30.665789 ignition[862]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:03:30.671879 ignition[862]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:03:30.673392 ignition[862]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:03:30.673392 ignition[862]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:03:30.673392 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 13 00:03:30.673392 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Aug 13 00:03:30.672805 unknown[862]: wrote ssh authorized keys file for user: core Aug 13 00:03:31.023831 systemd-networkd[746]: eth0: Gained IPv6LL Aug 13 00:03:31.469854 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:03:32.469099 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 13 00:03:32.471910 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:03:32.471910 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 00:03:32.704620 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:03:32.820620 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:03:32.820620 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 13 00:03:32.824167 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Aug 13 00:03:33.114297 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:03:33.561736 ignition[862]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 13 00:03:33.561736 ignition[862]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 00:03:33.565878 ignition[862]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:03:33.607804 ignition[862]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 00:03:33.610283 ignition[862]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 00:03:33.610283 ignition[862]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:03:33.610283 ignition[862]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:03:33.610283 ignition[862]: INFO : files: files passed Aug 13 00:03:33.610283 ignition[862]: INFO : Ignition finished successfully Aug 13 00:03:33.621948 kernel: kauditd_printk_skb: 23 callbacks suppressed Aug 13 00:03:33.621972 kernel: audit: type=1130 audit(1755043413.612:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.610649 systemd[1]: Finished ignition-files.service. 
Aug 13 00:03:33.613647 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:03:33.629323 kernel: audit: type=1130 audit(1755043413.623:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.629345 kernel: audit: type=1131 audit(1755043413.623:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.629493 initrd-setup-root-after-ignition[888]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Aug 13 00:03:33.634052 kernel: audit: type=1130 audit(1755043413.629:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.618158 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Aug 13 00:03:33.636531 initrd-setup-root-after-ignition[890]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:03:33.619000 systemd[1]: Starting ignition-quench.service... Aug 13 00:03:33.622909 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:03:33.622999 systemd[1]: Finished ignition-quench.service. Aug 13 00:03:33.628249 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:03:33.630423 systemd[1]: Reached target ignition-complete.target. Aug 13 00:03:33.635702 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:03:33.648107 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:03:33.648206 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:03:33.659304 kernel: audit: type=1130 audit(1755043413.648:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.659328 kernel: audit: type=1131 audit(1755043413.649:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.650012 systemd[1]: Reached target initrd-fs.target. Aug 13 00:03:33.660043 systemd[1]: Reached target initrd.target. Aug 13 00:03:33.661389 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:03:33.662269 systemd[1]: Starting dracut-pre-pivot.service... 
Aug 13 00:03:33.672679 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:03:33.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.676592 kernel: audit: type=1130 audit(1755043413.672:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.674285 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:03:33.682573 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:03:33.683448 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:03:33.684954 systemd[1]: Stopped target timers.target. Aug 13 00:03:33.686323 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:03:33.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.686437 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:03:33.692534 kernel: audit: type=1131 audit(1755043413.687:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.687755 systemd[1]: Stopped target initrd.target. Aug 13 00:03:33.692003 systemd[1]: Stopped target basic.target. Aug 13 00:03:33.693304 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:03:33.694767 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:03:33.696107 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:03:33.697649 systemd[1]: Stopped target remote-fs.target. Aug 13 00:03:33.699045 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:03:33.700481 systemd[1]: Stopped target sysinit.target. 
Aug 13 00:03:33.701800 systemd[1]: Stopped target local-fs.target. Aug 13 00:03:33.703111 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:03:33.704402 systemd[1]: Stopped target swap.target. Aug 13 00:03:33.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.705653 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:03:33.711314 kernel: audit: type=1131 audit(1755043413.706:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.705786 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:03:33.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.707113 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:03:33.716301 kernel: audit: type=1131 audit(1755043413.711:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.710574 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:03:33.710690 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:03:33.712150 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:03:33.712243 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:03:33.715843 systemd[1]: Stopped target paths.target. 
Aug 13 00:03:33.717033 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:03:33.718604 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:03:33.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.720347 systemd[1]: Stopped target slices.target. Aug 13 00:03:33.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.721677 systemd[1]: Stopped target sockets.target. Aug 13 00:03:33.722921 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:03:33.729795 iscsid[752]: iscsid shutting down. Aug 13 00:03:33.723035 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:03:33.724660 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:03:33.724762 systemd[1]: Stopped ignition-files.service. Aug 13 00:03:33.726897 systemd[1]: Stopping ignition-mount.service... Aug 13 00:03:33.728123 systemd[1]: Stopping iscsid.service... Aug 13 00:03:33.735018 ignition[903]: INFO : Ignition 2.14.0 Aug 13 00:03:33.735018 ignition[903]: INFO : Stage: umount Aug 13 00:03:33.735018 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:03:33.735018 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:03:33.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:33.739383 ignition[903]: INFO : umount: umount passed Aug 13 00:03:33.739383 ignition[903]: INFO : Ignition finished successfully Aug 13 00:03:33.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.736118 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:03:33.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.736280 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:03:33.738386 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:03:33.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.739912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:03:33.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.740050 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:03:33.741450 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:03:33.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.741546 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:03:33.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:33.744455 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:03:33.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.744595 systemd[1]: Stopped iscsid.service. Aug 13 00:03:33.746020 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:03:33.746104 systemd[1]: Stopped ignition-mount.service. Aug 13 00:03:33.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.747308 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:03:33.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.747356 systemd[1]: Closed iscsid.socket. Aug 13 00:03:33.748628 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:03:33.748676 systemd[1]: Stopped ignition-disks.service. Aug 13 00:03:33.750137 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:03:33.750177 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:03:33.751725 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:03:33.751765 systemd[1]: Stopped ignition-setup.service. Aug 13 00:03:33.755494 systemd[1]: Stopping iscsiuio.service... Aug 13 00:03:33.756993 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:03:33.757402 systemd[1]: iscsiuio.service: Deactivated successfully. 
Aug 13 00:03:33.757486 systemd[1]: Stopped iscsiuio.service. Aug 13 00:03:33.758517 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:03:33.758598 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:03:33.760530 systemd[1]: Stopped target network.target. Aug 13 00:03:33.761976 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:03:33.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.762010 systemd[1]: Closed iscsiuio.socket. Aug 13 00:03:33.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.780000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:03:33.763925 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:03:33.765663 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:03:33.776200 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:03:33.776308 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:03:33.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.777367 systemd-networkd[746]: eth0: DHCPv6 lease lost Aug 13 00:03:33.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.778828 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:03:33.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:03:33.787000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:03:33.778923 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:03:33.780554 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:03:33.780609 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:03:33.782415 systemd[1]: Stopping network-cleanup.service... Aug 13 00:03:33.783841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:03:33.783903 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:03:33.785537 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:03:33.785614 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:03:33.787606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:03:33.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.787677 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:03:33.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.791945 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:03:33.793736 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:03:33.798856 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:03:33.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.798988 systemd[1]: Stopped systemd-udevd.service. 
Aug 13 00:03:33.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.800650 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:03:33.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.800752 systemd[1]: Stopped network-cleanup.service. Aug 13 00:03:33.802154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:03:33.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.802195 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:03:33.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.803552 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:03:33.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.803607 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:03:33.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:33.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:33.804965 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:03:33.805016 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:03:33.807090 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:03:33.807141 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:03:33.808723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:03:33.808763 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:03:33.811180 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:03:33.812106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:03:33.812164 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:03:33.814285 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:03:33.814381 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:03:33.815723 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:03:33.815773 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:03:33.817082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:03:33.817161 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:03:33.818726 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:03:33.821112 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:03:33.828239 systemd[1]: Switching root. Aug 13 00:03:33.846176 systemd-journald[290]: Journal stopped Aug 13 00:03:35.979396 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Aug 13 00:03:35.979451 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:03:35.979466 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 00:03:35.979477 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:03:35.979487 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:03:35.979499 kernel: SELinux: policy capability open_perms=1 Aug 13 00:03:35.979509 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:03:35.979518 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:03:35.979531 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:03:35.979541 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:03:35.979551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:03:35.979577 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:03:35.979589 systemd[1]: Successfully loaded SELinux policy in 35.384ms. Aug 13 00:03:35.979608 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.898ms. Aug 13 00:03:35.979620 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:03:35.979631 systemd[1]: Detected virtualization kvm. Aug 13 00:03:35.979642 systemd[1]: Detected architecture arm64. Aug 13 00:03:35.979653 systemd[1]: Detected first boot. Aug 13 00:03:35.979672 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:03:35.979685 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:03:35.979697 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:03:35.979709 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Aug 13 00:03:35.979723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:03:35.979737 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:03:35.979749 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:03:35.979760 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:03:35.979771 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:03:35.979783 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:03:35.979794 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:03:35.979805 systemd[1]: Created slice system-getty.slice. Aug 13 00:03:35.979817 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:03:35.979828 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:03:35.979842 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:03:35.979854 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:03:35.979864 systemd[1]: Created slice user.slice. Aug 13 00:03:35.979874 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:03:35.979885 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:03:35.979895 systemd[1]: Set up automount boot.automount. Aug 13 00:03:35.979906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:03:35.979918 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:03:35.979928 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:03:35.979942 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:03:35.979953 systemd[1]: Reached target integritysetup.target. Aug 13 00:03:35.979964 systemd[1]: Reached target remote-cryptsetup.target. 
Aug 13 00:03:35.979974 systemd[1]: Reached target remote-fs.target. Aug 13 00:03:35.979985 systemd[1]: Reached target slices.target. Aug 13 00:03:35.979995 systemd[1]: Reached target swap.target. Aug 13 00:03:35.980008 systemd[1]: Reached target torcx.target. Aug 13 00:03:35.980019 systemd[1]: Reached target veritysetup.target. Aug 13 00:03:35.980029 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:03:35.980040 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:03:35.980050 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:03:35.980061 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:03:35.980072 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:03:35.980083 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:03:35.980093 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:03:35.980103 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:03:35.980116 systemd[1]: Mounting media.mount... Aug 13 00:03:35.980126 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:03:35.980136 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:03:35.980147 systemd[1]: Mounting tmp.mount... Aug 13 00:03:35.980157 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:03:35.980168 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:03:35.980178 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:03:35.980189 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:03:35.980199 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:03:35.980211 systemd[1]: Starting modprobe@drm.service... Aug 13 00:03:35.980221 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:03:35.980231 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:03:35.980241 systemd[1]: Starting modprobe@loop.service... 
Aug 13 00:03:35.980252 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:03:35.980262 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:03:35.980273 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:03:35.980283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:03:35.980294 kernel: loop: module loaded Aug 13 00:03:35.980310 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:03:35.980322 kernel: fuse: init (API version 7.34) Aug 13 00:03:35.980334 systemd[1]: Stopped systemd-journald.service. Aug 13 00:03:35.980345 systemd[1]: Starting systemd-journald.service... Aug 13 00:03:35.980356 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:03:35.980366 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:03:35.980377 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:03:35.980388 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:03:35.980398 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:03:35.980408 systemd[1]: Stopped verity-setup.service. Aug 13 00:03:35.980419 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:03:35.980431 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:03:35.980441 systemd[1]: Mounted media.mount. Aug 13 00:03:35.980451 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:03:35.980462 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:03:35.980472 systemd[1]: Mounted tmp.mount. Aug 13 00:03:35.980482 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:03:35.980493 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:03:35.980503 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:03:35.980514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:03:35.980526 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:03:35.980537 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:03:35.980547 systemd[1]: Finished modprobe@drm.service. Aug 13 00:03:35.980569 systemd-journald[1005]: Journal started Aug 13 00:03:35.980611 systemd-journald[1005]: Runtime Journal (/run/log/journal/d7b5502ec8af4241af1d3bd100d42d81) is 6.0M, max 48.7M, 42.6M free. Aug 13 00:03:33.912000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:03:34.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:03:34.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:03:34.018000 audit: BPF prog-id=10 op=LOAD Aug 13 00:03:34.018000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:03:34.019000 audit: BPF prog-id=11 op=LOAD Aug 13 00:03:34.019000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:03:34.091000 audit[936]: AVC avc: denied { associate } for pid=936 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:03:34.091000 audit[936]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8ac a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:34.091000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:03:34.092000 audit[936]: AVC avc: denied { associate } for pid=936 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:03:34.092000 audit[936]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd985 a2=1ed a3=0 items=2 ppid=919 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:34.092000 audit: CWD cwd="/" Aug 13 00:03:34.092000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:03:34.092000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:03:34.092000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:03:35.831000 audit: BPF prog-id=12 op=LOAD Aug 13 00:03:35.831000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:03:35.831000 audit: BPF prog-id=13 op=LOAD Aug 13 00:03:35.831000 audit: BPF prog-id=14 op=LOAD Aug 13 00:03:35.831000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:03:35.831000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:03:35.832000 audit: 
BPF prog-id=15 op=LOAD Aug 13 00:03:35.832000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:03:35.832000 audit: BPF prog-id=16 op=LOAD Aug 13 00:03:35.832000 audit: BPF prog-id=17 op=LOAD Aug 13 00:03:35.832000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:03:35.832000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:03:35.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.842000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:03:35.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 00:03:35.938000 audit: BPF prog-id=18 op=LOAD Aug 13 00:03:35.938000 audit: BPF prog-id=19 op=LOAD Aug 13 00:03:35.938000 audit: BPF prog-id=20 op=LOAD Aug 13 00:03:35.938000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:03:35.938000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:03:35.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:35.977000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:03:35.977000 audit[1005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd6833a90 a2=4000 a3=1 items=0 ppid=1 pid=1005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:03:35.977000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:03:35.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:34.088206 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:03:35.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:35.829973 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:03:34.089941 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:03:35.829987 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:03:34.089962 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:03:35.833383 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:03:34.089995 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:03:34.090005 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:03:34.090039 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:03:34.090052 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:03:34.090384 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:03:34.090421 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:03:35.982581 systemd[1]: Started systemd-journald.service. 
Aug 13 00:03:34.090433 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:03:34.091354 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:03:34.091387 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:03:34.091406 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:03:34.091420 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:03:34.091438 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:03:34.091454 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:34Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:03:35.570251 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:03:35.570519 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" 
level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:03:35.570730 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:03:35.570906 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:03:35.570963 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:03:35.571022 /usr/lib/systemd/system-generators/torcx-generator[936]: time="2025-08-13T00:03:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:03:35.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.984245 systemd[1]: Finished flatcar-tmpfiles.service. 
Aug 13 00:03:35.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.985580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:03:35.985727 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:03:35.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.987071 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:03:35.987255 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:03:35.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.988517 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:03:35.988671 systemd[1]: Finished modprobe@loop.service. Aug 13 00:03:35.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:35.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.989852 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:03:35.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.991199 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:03:35.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.992623 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:03:35.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:35.994067 systemd[1]: Reached target network-pre.target. Aug 13 00:03:35.996507 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:03:35.998596 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:03:35.999506 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:03:36.005584 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:03:36.007780 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:03:36.008793 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:03:36.010096 systemd[1]: Starting systemd-random-seed.service... 
Aug 13 00:03:36.011127 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:03:36.012247 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:03:36.014862 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:03:36.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.019067 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:03:36.020321 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:03:36.021510 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:03:36.024101 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:03:36.035116 udevadm[1037]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:03:36.036879 systemd-journald[1005]: Time spent on flushing to /var/log/journal/d7b5502ec8af4241af1d3bd100d42d81 is 15.675ms for 1002 entries. Aug 13 00:03:36.036879 systemd-journald[1005]: System Journal (/var/log/journal/d7b5502ec8af4241af1d3bd100d42d81) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:03:36.065302 systemd-journald[1005]: Received client request to flush runtime journal. Aug 13 00:03:36.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:36.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.042162 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:03:36.043968 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:03:36.045055 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:03:36.059654 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:03:36.066280 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:03:36.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.418029 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:03:36.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.419000 audit: BPF prog-id=21 op=LOAD Aug 13 00:03:36.419000 audit: BPF prog-id=22 op=LOAD Aug 13 00:03:36.419000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:03:36.419000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:03:36.420557 systemd[1]: Starting systemd-udevd.service... Aug 13 00:03:36.460516 systemd-udevd[1039]: Using default interface naming scheme 'v252'. Aug 13 00:03:36.482671 systemd[1]: Started systemd-udevd.service. Aug 13 00:03:36.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.484000 audit: BPF prog-id=23 op=LOAD Aug 13 00:03:36.485471 systemd[1]: Starting systemd-networkd.service... 
Aug 13 00:03:36.492000 audit: BPF prog-id=24 op=LOAD Aug 13 00:03:36.492000 audit: BPF prog-id=25 op=LOAD Aug 13 00:03:36.493000 audit: BPF prog-id=26 op=LOAD Aug 13 00:03:36.494520 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:03:36.509968 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Aug 13 00:03:36.523946 systemd[1]: Started systemd-userdbd.service. Aug 13 00:03:36.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.541910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:03:36.588579 systemd-networkd[1047]: lo: Link UP Aug 13 00:03:36.588592 systemd-networkd[1047]: lo: Gained carrier Aug 13 00:03:36.591588 systemd-networkd[1047]: Enumeration completed Aug 13 00:03:36.591730 systemd[1]: Started systemd-networkd.service. Aug 13 00:03:36.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.593059 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:03:36.593100 systemd-networkd[1047]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:03:36.595527 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:03:36.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.597974 systemd-networkd[1047]: eth0: Link UP Aug 13 00:03:36.597985 systemd-networkd[1047]: eth0: Gained carrier Aug 13 00:03:36.610238 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 13 00:03:36.640706 systemd-networkd[1047]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:03:36.645684 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:03:36.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.646927 systemd[1]: Reached target cryptsetup.target. Aug 13 00:03:36.649978 systemd[1]: Starting lvm2-activation.service... Aug 13 00:03:36.658197 lvm[1073]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:03:36.699713 systemd[1]: Finished lvm2-activation.service. Aug 13 00:03:36.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.700786 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:03:36.701781 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:03:36.701821 systemd[1]: Reached target local-fs.target. Aug 13 00:03:36.702687 systemd[1]: Reached target machines.target. Aug 13 00:03:36.704923 systemd[1]: Starting ldconfig.service... Aug 13 00:03:36.706228 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:03:36.706311 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:03:36.707868 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:03:36.709919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:03:36.712464 systemd[1]: Starting systemd-machine-id-commit.service... 
Aug 13 00:03:36.714814 systemd[1]: Starting systemd-sysext.service... Aug 13 00:03:36.719298 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1075 (bootctl) Aug 13 00:03:36.721055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:03:36.726432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:03:36.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.732802 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:03:36.739275 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:03:36.739468 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:03:36.752587 kernel: loop0: detected capacity change from 0 to 207008 Aug 13 00:03:36.795353 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:03:36.796307 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:03:36.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.806639 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:03:36.811192 systemd-fsck[1087]: fsck.fat 4.2 (2021-01-31) Aug 13 00:03:36.811192 systemd-fsck[1087]: /dev/vda1: 236 files, 117307/258078 clusters Aug 13 00:03:36.813246 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:03:36.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:36.817280 systemd[1]: Mounting boot.mount... Aug 13 00:03:36.828795 systemd[1]: Mounted boot.mount. Aug 13 00:03:36.831610 kernel: loop1: detected capacity change from 0 to 207008 Aug 13 00:03:36.838634 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:03:36.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.841097 (sd-sysext)[1092]: Using extensions 'kubernetes'. Aug 13 00:03:36.841455 (sd-sysext)[1092]: Merged extensions into '/usr'. Aug 13 00:03:36.863739 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:03:36.866645 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:03:36.869801 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:03:36.874824 systemd[1]: Starting modprobe@loop.service... Aug 13 00:03:36.875869 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:03:36.876091 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:03:36.877438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:03:36.877918 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:03:36.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:36.879728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:03:36.879922 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:03:36.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.881371 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:03:36.881486 systemd[1]: Finished modprobe@loop.service. Aug 13 00:03:36.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.883090 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:03:36.883261 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:03:36.963192 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:03:36.969008 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:03:36.971268 systemd[1]: Finished systemd-sysext.service. Aug 13 00:03:36.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:36.972447 ldconfig[1074]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:03:36.973698 systemd[1]: Starting ensure-sysext.service... Aug 13 00:03:36.976219 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:03:36.979703 systemd[1]: Finished ldconfig.service. Aug 13 00:03:36.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:36.982421 systemd[1]: Reloading. Aug 13 00:03:36.992408 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:03:36.995201 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:03:36.998618 systemd-tmpfiles[1099]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:03:37.037141 /usr/lib/systemd/system-generators/torcx-generator[1119]: time="2025-08-13T00:03:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:03:37.037815 /usr/lib/systemd/system-generators/torcx-generator[1119]: time="2025-08-13T00:03:37Z" level=info msg="torcx already run" Aug 13 00:03:37.117330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:03:37.117350 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Aug 13 00:03:37.134207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:03:37.178000 audit: BPF prog-id=27 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:03:37.178000 audit: BPF prog-id=28 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=29 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:03:37.178000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:03:37.178000 audit: BPF prog-id=30 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:03:37.178000 audit: BPF prog-id=31 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=32 op=LOAD Aug 13 00:03:37.178000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:03:37.178000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:03:37.179000 audit: BPF prog-id=33 op=LOAD Aug 13 00:03:37.179000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:03:37.179000 audit: BPF prog-id=34 op=LOAD Aug 13 00:03:37.179000 audit: BPF prog-id=35 op=LOAD Aug 13 00:03:37.179000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:03:37.179000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:03:37.184511 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:03:37.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.189364 systemd[1]: Starting audit-rules.service... Aug 13 00:03:37.191940 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:03:37.194860 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:03:37.197000 audit: BPF prog-id=36 op=LOAD Aug 13 00:03:37.199612 systemd[1]: Starting systemd-resolved.service... 
Aug 13 00:03:37.201000 audit: BPF prog-id=37 op=LOAD Aug 13 00:03:37.203729 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:03:37.205858 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:03:37.211083 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.211000 audit[1169]: SYSTEM_BOOT pid=1169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.212461 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:03:37.214890 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:03:37.217116 systemd[1]: Starting modprobe@loop.service... Aug 13 00:03:37.218842 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.219026 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:03:37.220873 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:03:37.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.222424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:03:37.222554 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:03:37.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:37.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.223963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:03:37.224092 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:03:37.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.225632 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:03:37.225755 systemd[1]: Finished modprobe@loop.service. Aug 13 00:03:37.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.227131 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:03:37.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:37.230360 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:03:37.230504 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.232306 systemd[1]: Starting systemd-update-done.service... Aug 13 00:03:37.233302 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:03:37.234253 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:03:37.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.238087 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.239426 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:03:37.241644 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:03:37.244057 systemd[1]: Starting modprobe@loop.service... Aug 13 00:03:37.244959 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.245196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:03:37.245398 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:03:37.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:03:37.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.246465 systemd[1]: Finished systemd-update-done.service. Aug 13 00:03:37.247825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:03:37.247946 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:03:37.249171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:03:37.249288 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:03:37.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.250892 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:03:37.251056 systemd[1]: Finished modprobe@loop.service. Aug 13 00:03:37.252374 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:03:37.252488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:03:37.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:03:37.259306 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.260945 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:03:37.263183 systemd[1]: Starting modprobe@drm.service... Aug 13 00:03:37.269148 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:03:37.271163 systemd[1]: Starting modprobe@loop.service... Aug 13 00:03:37.272219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:03:37.272373 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:03:37.273940 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:03:37.275064 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:03:37.275724 systemd-timesyncd[1168]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Aug 13 00:03:37.275000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 00:03:37.275000 audit[1189]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe10a3b20 a2=420 a3=0 items=0 ppid=1158 pid=1189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 00:03:37.275000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 00:03:37.276735 augenrules[1189]: No rules
Aug 13 00:03:37.275784 systemd-timesyncd[1168]: Initial clock synchronization to Wed 2025-08-13 00:03:37.673486 UTC.
Aug 13 00:03:37.276246 systemd[1]: Started systemd-timesyncd.service.
Aug 13 00:03:37.277963 systemd[1]: Finished audit-rules.service.
Aug 13 00:03:37.279186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:03:37.279312 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 00:03:37.280622 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:03:37.280762 systemd[1]: Finished modprobe@drm.service.
Aug 13 00:03:37.281950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:03:37.282071 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 00:03:37.283355 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:03:37.283480 systemd[1]: Finished modprobe@loop.service.
Aug 13 00:03:37.285307 systemd[1]: Reached target time-set.target.
Aug 13 00:03:37.286515 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:03:37.286577 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 00:03:37.286927 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:03:37.290851 systemd-resolved[1162]: Positive Trust Anchors:
Aug 13 00:03:37.291095 systemd-resolved[1162]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:03:37.291184 systemd-resolved[1162]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:03:37.322413 systemd-resolved[1162]: Defaulting to hostname 'linux'.
Aug 13 00:03:37.324131 systemd[1]: Started systemd-resolved.service.
Aug 13 00:03:37.325155 systemd[1]: Reached target network.target.
Aug 13 00:03:37.326027 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:03:37.326892 systemd[1]: Reached target sysinit.target.
Aug 13 00:03:37.327791 systemd[1]: Started motdgen.path.
Aug 13 00:03:37.328554 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 00:03:37.329886 systemd[1]: Started logrotate.timer.
Aug 13 00:03:37.330786 systemd[1]: Started mdadm.timer.
Aug 13 00:03:37.331497 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 00:03:37.332448 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:03:37.332483 systemd[1]: Reached target paths.target.
Aug 13 00:03:37.333309 systemd[1]: Reached target timers.target.
Aug 13 00:03:37.334520 systemd[1]: Listening on dbus.socket.
Aug 13 00:03:37.336392 systemd[1]: Starting docker.socket...
Aug 13 00:03:37.340826 systemd[1]: Listening on sshd.socket.
Aug 13 00:03:37.341764 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:03:37.342233 systemd[1]: Listening on docker.socket.
Aug 13 00:03:37.343160 systemd[1]: Reached target sockets.target.
Aug 13 00:03:37.344009 systemd[1]: Reached target basic.target.
Aug 13 00:03:37.344886 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:03:37.344918 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 00:03:37.346054 systemd[1]: Starting containerd.service...
Aug 13 00:03:37.347849 systemd[1]: Starting dbus.service...
Aug 13 00:03:37.349815 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 00:03:37.352294 systemd[1]: Starting extend-filesystems.service...
Aug 13 00:03:37.353322 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 00:03:37.354784 systemd[1]: Starting motdgen.service...
Aug 13 00:03:37.356837 systemd[1]: Starting prepare-helm.service...
Aug 13 00:03:37.359973 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 00:03:37.362324 systemd[1]: Starting sshd-keygen.service...
Aug 13 00:03:37.365365 systemd[1]: Starting systemd-logind.service...
Aug 13 00:03:37.366277 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 00:03:37.366349 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:03:37.366814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:03:37.368745 systemd[1]: Starting update-engine.service...
Aug 13 00:03:37.370262 jq[1200]: false
Aug 13 00:03:37.371832 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 00:03:37.377287 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:03:37.377693 jq[1218]: true
Aug 13 00:03:37.377500 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 00:03:37.379044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:03:37.379286 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 00:03:37.395510 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:03:37.395962 systemd[1]: Finished motdgen.service.
Aug 13 00:03:37.396643 jq[1222]: true
Aug 13 00:03:37.402117 extend-filesystems[1201]: Found loop1
Aug 13 00:03:37.403312 extend-filesystems[1201]: Found vda
Aug 13 00:03:37.404089 dbus-daemon[1199]: [system] SELinux support is enabled
Aug 13 00:03:37.404243 systemd[1]: Started dbus.service.
Aug 13 00:03:37.404575 extend-filesystems[1201]: Found vda1
Aug 13 00:03:37.405931 extend-filesystems[1201]: Found vda2
Aug 13 00:03:37.406722 extend-filesystems[1201]: Found vda3
Aug 13 00:03:37.406877 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:03:37.406899 systemd[1]: Reached target system-config.target.
Aug 13 00:03:37.407552 extend-filesystems[1201]: Found usr
Aug 13 00:03:37.408961 extend-filesystems[1201]: Found vda4
Aug 13 00:03:37.408961 extend-filesystems[1201]: Found vda6
Aug 13 00:03:37.408383 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:03:37.412051 extend-filesystems[1201]: Found vda7
Aug 13 00:03:37.412051 extend-filesystems[1201]: Found vda9
Aug 13 00:03:37.412051 extend-filesystems[1201]: Checking size of /dev/vda9
Aug 13 00:03:37.408398 systemd[1]: Reached target user-config.target.
Aug 13 00:03:37.422147 tar[1221]: linux-arm64/LICENSE
Aug 13 00:03:37.422147 tar[1221]: linux-arm64/helm
Aug 13 00:03:37.434588 systemd-logind[1211]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 13 00:03:37.439935 systemd-logind[1211]: New seat seat0.
Aug 13 00:03:37.442904 extend-filesystems[1201]: Resized partition /dev/vda9
Aug 13 00:03:37.452865 extend-filesystems[1248]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 00:03:37.451992 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 00:03:37.457758 bash[1245]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:03:37.459224 systemd[1]: Started systemd-logind.service.
Aug 13 00:03:37.463064 update_engine[1216]: I0813 00:03:37.462604 1216 main.cc:92] Flatcar Update Engine starting
Aug 13 00:03:37.464590 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 13 00:03:37.465789 systemd[1]: Started update-engine.service.
Aug 13 00:03:37.466139 update_engine[1216]: I0813 00:03:37.466082 1216 update_check_scheduler.cc:74] Next update check in 10m29s
Aug 13 00:03:37.469034 systemd[1]: Started locksmithd.service.
Aug 13 00:03:37.489587 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 13 00:03:37.507747 env[1223]: time="2025-08-13T00:03:37.504866960Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 00:03:37.507999 extend-filesystems[1248]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:03:37.507999 extend-filesystems[1248]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 00:03:37.507999 extend-filesystems[1248]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 13 00:03:37.513607 extend-filesystems[1201]: Resized filesystem in /dev/vda9
Aug 13 00:03:37.509909 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:03:37.510068 systemd[1]: Finished extend-filesystems.service.
Aug 13 00:03:37.525970 env[1223]: time="2025-08-13T00:03:37.525923640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:03:37.526289 env[1223]: time="2025-08-13T00:03:37.526092080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527596 env[1223]: time="2025-08-13T00:03:37.527522680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527596 env[1223]: time="2025-08-13T00:03:37.527557320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527844 env[1223]: time="2025-08-13T00:03:37.527816480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527844 env[1223]: time="2025-08-13T00:03:37.527839480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527844 env[1223]: time="2025-08-13T00:03:37.527853360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 00:03:37.527844 env[1223]: time="2025-08-13T00:03:37.527864160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.527844 env[1223]: time="2025-08-13T00:03:37.527935960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.528248 env[1223]: time="2025-08-13T00:03:37.528224080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:03:37.528407 env[1223]: time="2025-08-13T00:03:37.528345160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:03:37.528407 env[1223]: time="2025-08-13T00:03:37.528362760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:03:37.528474 env[1223]: time="2025-08-13T00:03:37.528415480Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 00:03:37.528474 env[1223]: time="2025-08-13T00:03:37.528427400Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:03:37.537944 env[1223]: time="2025-08-13T00:03:37.537903400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:03:37.537944 env[1223]: time="2025-08-13T00:03:37.537944480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:03:37.538068 env[1223]: time="2025-08-13T00:03:37.537958840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:03:37.538068 env[1223]: time="2025-08-13T00:03:37.537992680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538068 env[1223]: time="2025-08-13T00:03:37.538009840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538068 env[1223]: time="2025-08-13T00:03:37.538023760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538068 env[1223]: time="2025-08-13T00:03:37.538036920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538456 env[1223]: time="2025-08-13T00:03:37.538406920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538456 env[1223]: time="2025-08-13T00:03:37.538432440Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538456 env[1223]: time="2025-08-13T00:03:37.538446360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538547 env[1223]: time="2025-08-13T00:03:37.538458520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538547 env[1223]: time="2025-08-13T00:03:37.538471960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:03:37.538726 env[1223]: time="2025-08-13T00:03:37.538617720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:03:37.538726 env[1223]: time="2025-08-13T00:03:37.538713800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:03:37.538977 env[1223]: time="2025-08-13T00:03:37.538932680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:03:37.538977 env[1223]: time="2025-08-13T00:03:37.538965600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539047 env[1223]: time="2025-08-13T00:03:37.538979880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539088920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539105560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539117880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539128480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539139760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539148 env[1223]: time="2025-08-13T00:03:37.539151080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539288 env[1223]: time="2025-08-13T00:03:37.539162960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539288 env[1223]: time="2025-08-13T00:03:37.539175080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539288 env[1223]: time="2025-08-13T00:03:37.539187120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:03:37.539357 env[1223]: time="2025-08-13T00:03:37.539319200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539357 env[1223]: time="2025-08-13T00:03:37.539335000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539357 env[1223]: time="2025-08-13T00:03:37.539347440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539418 env[1223]: time="2025-08-13T00:03:37.539359320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:03:37.539418 env[1223]: time="2025-08-13T00:03:37.539375480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 00:03:37.539418 env[1223]: time="2025-08-13T00:03:37.539386720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:03:37.539418 env[1223]: time="2025-08-13T00:03:37.539404120Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 00:03:37.539491 env[1223]: time="2025-08-13T00:03:37.539440480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:03:37.539754 env[1223]: time="2025-08-13T00:03:37.539642320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:03:37.539754 env[1223]: time="2025-08-13T00:03:37.539715840Z" level=info msg="Connect containerd service"
Aug 13 00:03:37.539754 env[1223]: time="2025-08-13T00:03:37.539748480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 00:03:37.540480 env[1223]: time="2025-08-13T00:03:37.540355360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:03:37.541980 env[1223]: time="2025-08-13T00:03:37.540622760Z" level=info msg="Start subscribing containerd event"
Aug 13 00:03:37.541980 env[1223]: time="2025-08-13T00:03:37.540689800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 00:03:37.541980 env[1223]: time="2025-08-13T00:03:37.540694720Z" level=info msg="Start recovering state"
Aug 13 00:03:37.541980 env[1223]: time="2025-08-13T00:03:37.540728120Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 00:03:37.541980 env[1223]: time="2025-08-13T00:03:37.540772920Z" level=info msg="containerd successfully booted in 0.043080s"
Aug 13 00:03:37.540856 systemd[1]: Started containerd.service.
Aug 13 00:03:37.542376 env[1223]: time="2025-08-13T00:03:37.542216520Z" level=info msg="Start event monitor"
Aug 13 00:03:37.542376 env[1223]: time="2025-08-13T00:03:37.542259440Z" level=info msg="Start snapshots syncer"
Aug 13 00:03:37.542376 env[1223]: time="2025-08-13T00:03:37.542276800Z" level=info msg="Start cni network conf syncer for default"
Aug 13 00:03:37.542376 env[1223]: time="2025-08-13T00:03:37.542287120Z" level=info msg="Start streaming server"
Aug 13 00:03:37.565255 locksmithd[1250]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:03:37.863388 tar[1221]: linux-arm64/README.md
Aug 13 00:03:37.867783 systemd[1]: Finished prepare-helm.service.
Aug 13 00:03:38.575775 systemd-networkd[1047]: eth0: Gained IPv6LL
Aug 13 00:03:38.577550 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 00:03:38.578980 systemd[1]: Reached target network-online.target.
Aug 13 00:03:38.581559 systemd[1]: Starting kubelet.service...
Aug 13 00:03:39.285564 systemd[1]: Started kubelet.service.
Aug 13 00:03:39.711644 sshd_keygen[1220]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:03:39.729863 systemd[1]: Finished sshd-keygen.service.
Aug 13 00:03:39.732514 systemd[1]: Starting issuegen.service...
Aug 13 00:03:39.738020 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:03:39.738217 systemd[1]: Finished issuegen.service.
Aug 13 00:03:39.740952 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 00:03:39.748084 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 00:03:39.750840 systemd[1]: Started getty@tty1.service.
Aug 13 00:03:39.753051 systemd[1]: Started serial-getty@ttyAMA0.service.
Aug 13 00:03:39.754549 systemd[1]: Reached target getty.target.
Aug 13 00:03:39.755559 systemd[1]: Reached target multi-user.target.
Aug 13 00:03:39.758067 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 00:03:39.766235 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 00:03:39.766463 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 00:03:39.767763 systemd[1]: Startup finished in 611ms (kernel) + 6.297s (initrd) + 5.898s (userspace) = 12.807s.
Aug 13 00:03:39.794898 kubelet[1268]: E0813 00:03:39.794828 1268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:03:39.796508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:03:39.796651 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:03:40.134210 systemd[1]: Created slice system-sshd.slice.
Aug 13 00:03:40.135553 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:41812.service.
Aug 13 00:03:40.190012 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 41812 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:03:40.192256 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.201859 systemd-logind[1211]: New session 1 of user core.
Aug 13 00:03:40.202816 systemd[1]: Created slice user-500.slice.
Aug 13 00:03:40.204081 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 00:03:40.212886 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 00:03:40.214364 systemd[1]: Starting user@500.service...
Aug 13 00:03:40.218135 (systemd)[1293]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.283681 systemd[1293]: Queued start job for default target default.target.
Aug 13 00:03:40.284174 systemd[1293]: Reached target paths.target.
Aug 13 00:03:40.284207 systemd[1293]: Reached target sockets.target.
Aug 13 00:03:40.284228 systemd[1293]: Reached target timers.target.
Aug 13 00:03:40.284239 systemd[1293]: Reached target basic.target.
Aug 13 00:03:40.284280 systemd[1293]: Reached target default.target.
Aug 13 00:03:40.284305 systemd[1293]: Startup finished in 59ms.
Aug 13 00:03:40.284514 systemd[1]: Started user@500.service.
Aug 13 00:03:40.286384 systemd[1]: Started session-1.scope.
Aug 13 00:03:40.341263 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:41816.service.
Aug 13 00:03:40.383654 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 41816 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:03:40.385060 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.390012 systemd[1]: Started session-2.scope.
Aug 13 00:03:40.390495 systemd-logind[1211]: New session 2 of user core.
Aug 13 00:03:40.448461 sshd[1302]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:40.451788 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:41816.service: Deactivated successfully.
Aug 13 00:03:40.452505 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:03:40.453048 systemd-logind[1211]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:03:40.454187 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:41830.service.
Aug 13 00:03:40.454946 systemd-logind[1211]: Removed session 2.
Aug 13 00:03:40.490209 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 41830 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:03:40.491530 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.495213 systemd-logind[1211]: New session 3 of user core.
Aug 13 00:03:40.496060 systemd[1]: Started session-3.scope.
Aug 13 00:03:40.549175 sshd[1308]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:40.554125 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:41830.service: Deactivated successfully.
Aug 13 00:03:40.554888 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:03:40.555489 systemd-logind[1211]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:03:40.556787 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:41834.service.
Aug 13 00:03:40.557590 systemd-logind[1211]: Removed session 3.
Aug 13 00:03:40.602479 sshd[1314]: Accepted publickey for core from 10.0.0.1 port 41834 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:03:40.604168 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.607847 systemd-logind[1211]: New session 4 of user core.
Aug 13 00:03:40.608720 systemd[1]: Started session-4.scope.
Aug 13 00:03:40.665056 sshd[1314]: pam_unix(sshd:session): session closed for user core
Aug 13 00:03:40.668084 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:41834.service: Deactivated successfully.
Aug 13 00:03:40.668745 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:03:40.669250 systemd-logind[1211]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:03:40.670293 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:41848.service.
Aug 13 00:03:40.670964 systemd-logind[1211]: Removed session 4.
Aug 13 00:03:40.706271 sshd[1320]: Accepted publickey for core from 10.0.0.1 port 41848 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:03:40.707688 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:03:40.711353 systemd-logind[1211]: New session 5 of user core.
Aug 13 00:03:40.712266 systemd[1]: Started session-5.scope.
Aug 13 00:03:40.795451 sudo[1324]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:03:40.795723 sudo[1324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 00:03:40.864792 systemd[1]: Starting docker.service...
Aug 13 00:03:41.006190 env[1336]: time="2025-08-13T00:03:41.005735029Z" level=info msg="Starting up"
Aug 13 00:03:41.011179 env[1336]: time="2025-08-13T00:03:41.011137043Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:03:41.011308 env[1336]: time="2025-08-13T00:03:41.011292867Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:03:41.011377 env[1336]: time="2025-08-13T00:03:41.011360116Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:03:41.011451 env[1336]: time="2025-08-13T00:03:41.011437284Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:03:41.013795 env[1336]: time="2025-08-13T00:03:41.013764519Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 00:03:41.013795 env[1336]: time="2025-08-13T00:03:41.013791055Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 00:03:41.013899 env[1336]: time="2025-08-13T00:03:41.013809117Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 00:03:41.013899 env[1336]: time="2025-08-13T00:03:41.013820236Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 00:03:41.230196 env[1336]: time="2025-08-13T00:03:41.230153749Z" level=info msg="Loading containers: start."
Aug 13 00:03:41.374612 kernel: Initializing XFRM netlink socket
Aug 13 00:03:41.401672 env[1336]: time="2025-08-13T00:03:41.401632765Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 00:03:41.463935 systemd-networkd[1047]: docker0: Link UP
Aug 13 00:03:41.481008 env[1336]: time="2025-08-13T00:03:41.480947255Z" level=info msg="Loading containers: done."
Aug 13 00:03:41.500021 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2514865090-merged.mount: Deactivated successfully.
Aug 13 00:03:41.503368 env[1336]: time="2025-08-13T00:03:41.503323520Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:03:41.503764 env[1336]: time="2025-08-13T00:03:41.503739657Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 00:03:41.503962 env[1336]: time="2025-08-13T00:03:41.503944088Z" level=info msg="Daemon has completed initialization"
Aug 13 00:03:41.518246 systemd[1]: Started docker.service.
Aug 13 00:03:41.522193 env[1336]: time="2025-08-13T00:03:41.522137198Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:03:42.249024 env[1223]: time="2025-08-13T00:03:42.248957831Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Aug 13 00:03:42.927483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463712981.mount: Deactivated successfully.
Aug 13 00:03:44.305254 env[1223]: time="2025-08-13T00:03:44.305187522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:44.309732 env[1223]: time="2025-08-13T00:03:44.309678449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:44.312871 env[1223]: time="2025-08-13T00:03:44.312607353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:44.315402 env[1223]: time="2025-08-13T00:03:44.315357066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:44.316458 env[1223]: time="2025-08-13T00:03:44.316275558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Aug 13 00:03:44.318034 env[1223]: time="2025-08-13T00:03:44.317997962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Aug 13 00:03:45.809719 env[1223]: time="2025-08-13T00:03:45.809659266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:45.812079 env[1223]: time="2025-08-13T00:03:45.811994845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:45.814678 env[1223]: time="2025-08-13T00:03:45.814607736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:45.817124 env[1223]: time="2025-08-13T00:03:45.816638965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:45.817479 env[1223]: time="2025-08-13T00:03:45.817443089Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Aug 13 00:03:45.818399 env[1223]: time="2025-08-13T00:03:45.818362827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Aug 13 00:03:47.162159 env[1223]: time="2025-08-13T00:03:47.162094465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:47.164139 env[1223]: time="2025-08-13T00:03:47.164089536Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:47.167265 env[1223]: time="2025-08-13T00:03:47.167231626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:47.169278 env[1223]: time="2025-08-13T00:03:47.169244885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:47.170261 env[1223]: time="2025-08-13T00:03:47.170226669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Aug 13 00:03:47.170908 env[1223]: time="2025-08-13T00:03:47.170858335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Aug 13 00:03:48.220272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215619226.mount: Deactivated successfully.
Aug 13 00:03:48.834696 env[1223]: time="2025-08-13T00:03:48.834641853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:48.835863 env[1223]: time="2025-08-13T00:03:48.835833402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:48.837536 env[1223]: time="2025-08-13T00:03:48.837491738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:48.838642 env[1223]: time="2025-08-13T00:03:48.838611033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:48.839063 env[1223]: time="2025-08-13T00:03:48.839029031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Aug 13 00:03:48.839979 env[1223]: time="2025-08-13T00:03:48.839940968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 00:03:49.467557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262977083.mount: Deactivated successfully.
Aug 13 00:03:49.972153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:03:49.972348 systemd[1]: Stopped kubelet.service.
Aug 13 00:03:49.974076 systemd[1]: Starting kubelet.service...
Aug 13 00:03:50.087121 systemd[1]: Started kubelet.service.
Aug 13 00:03:50.259186 kubelet[1471]: E0813 00:03:50.259054 1471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:03:50.262032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:03:50.262173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:03:50.424676 env[1223]: time="2025-08-13T00:03:50.424624408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.432491 env[1223]: time="2025-08-13T00:03:50.432445615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.436790 env[1223]: time="2025-08-13T00:03:50.436740934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.438814 env[1223]: time="2025-08-13T00:03:50.438777209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.439766 env[1223]: time="2025-08-13T00:03:50.439730907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 13 00:03:50.440338 env[1223]: time="2025-08-13T00:03:50.440299143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:03:50.904281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790446443.mount: Deactivated successfully.
Aug 13 00:03:50.919939 env[1223]: time="2025-08-13T00:03:50.919839451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.927346 env[1223]: time="2025-08-13T00:03:50.927292366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.931505 env[1223]: time="2025-08-13T00:03:50.931460382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.933554 env[1223]: time="2025-08-13T00:03:50.933522231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:50.934102 env[1223]: time="2025-08-13T00:03:50.934068771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 13 00:03:50.934910 env[1223]: time="2025-08-13T00:03:50.934885632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 00:03:51.479115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359575303.mount: Deactivated successfully.
Aug 13 00:03:53.932162 env[1223]: time="2025-08-13T00:03:53.932100897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:53.934279 env[1223]: time="2025-08-13T00:03:53.934237620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:53.936740 env[1223]: time="2025-08-13T00:03:53.936695564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:53.939298 env[1223]: time="2025-08-13T00:03:53.939251160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:03:53.940271 env[1223]: time="2025-08-13T00:03:53.940230850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 13 00:03:57.967611 systemd[1]: Stopped kubelet.service.
Aug 13 00:03:57.970202 systemd[1]: Starting kubelet.service...
Aug 13 00:03:57.998907 systemd[1]: Reloading.
Aug 13 00:03:58.066160 /usr/lib/systemd/system-generators/torcx-generator[1528]: time="2025-08-13T00:03:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 00:03:58.066191 /usr/lib/systemd/system-generators/torcx-generator[1528]: time="2025-08-13T00:03:58Z" level=info msg="torcx already run"
Aug 13 00:03:58.143533 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 00:03:58.143758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 00:03:58.159224 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:03:58.226360 systemd[1]: Started kubelet.service.
Aug 13 00:03:58.228041 systemd[1]: Stopping kubelet.service...
Aug 13 00:03:58.228478 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 00:03:58.228783 systemd[1]: Stopped kubelet.service.
Aug 13 00:03:58.230423 systemd[1]: Starting kubelet.service...
Aug 13 00:03:58.323892 systemd[1]: Started kubelet.service.
Aug 13 00:03:58.361017 kubelet[1572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:03:58.361017 kubelet[1572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 00:03:58.361017 kubelet[1572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 00:03:58.361431 kubelet[1572]: I0813 00:03:58.361083 1572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 00:03:59.058325 kubelet[1572]: I0813 00:03:59.058285 1572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 00:03:59.058524 kubelet[1572]: I0813 00:03:59.058510 1572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 00:03:59.058970 kubelet[1572]: I0813 00:03:59.058949 1572 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 00:03:59.106468 kubelet[1572]: I0813 00:03:59.106418 1572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 00:03:59.107150 kubelet[1572]: E0813 00:03:59.107114 1572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:03:59.114629 kubelet[1572]: E0813 00:03:59.114575 1572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 00:03:59.114832 kubelet[1572]: I0813 00:03:59.114813 1572 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 00:03:59.117888 kubelet[1572]: I0813 00:03:59.117854 1572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 00:03:59.118944 kubelet[1572]: I0813 00:03:59.118885 1572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 00:03:59.119291 kubelet[1572]: I0813 00:03:59.119060 1572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 00:03:59.119515 kubelet[1572]: I0813 00:03:59.119500 1572 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 00:03:59.119596 kubelet[1572]: I0813 00:03:59.119584 1572 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:03:59.119887 kubelet[1572]: I0813 00:03:59.119870 1572 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:03:59.122982 kubelet[1572]: I0813 00:03:59.122950 1572 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:03:59.123161 kubelet[1572]: I0813 00:03:59.123147 1572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:03:59.123303 kubelet[1572]: I0813 00:03:59.123290 1572 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:03:59.123376 kubelet[1572]: I0813 00:03:59.123365 1572 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:03:59.146945 kubelet[1572]: I0813 00:03:59.146914 1572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:03:59.147401 kubelet[1572]: W0813 00:03:59.147116 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Aug 13 00:03:59.147401 kubelet[1572]: E0813 00:03:59.147339 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:03:59.147882 kubelet[1572]: I0813 00:03:59.147862 1572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:03:59.148006 kubelet[1572]: W0813 00:03:59.147990 1572 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 00:03:59.148107 kubelet[1572]: W0813 00:03:59.148071 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Aug 13 00:03:59.148143 kubelet[1572]: E0813 00:03:59.148125 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:03:59.149343 kubelet[1572]: I0813 00:03:59.149317 1572 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:03:59.149463 kubelet[1572]: I0813 00:03:59.149451 1572 server.go:1287] "Started kubelet"
Aug 13 00:03:59.150686 kubelet[1572]: I0813 00:03:59.150228 1572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:03:59.150686 kubelet[1572]: I0813 00:03:59.150554 1572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:03:59.150686 kubelet[1572]: I0813 00:03:59.150636 1572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:03:59.152679 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Aug 13 00:03:59.152779 kubelet[1572]: I0813 00:03:59.151500 1572 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:03:59.153629 kubelet[1572]: I0813 00:03:59.153609 1572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:03:59.155061 kubelet[1572]: I0813 00:03:59.154914 1572 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:03:59.155061 kubelet[1572]: I0813 00:03:59.155027 1572 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:03:59.155173 kubelet[1572]: I0813 00:03:59.155077 1572 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:03:59.155246 kubelet[1572]: I0813 00:03:59.155217 1572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:03:59.155761 kubelet[1572]: W0813 00:03:59.155466 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Aug 13 00:03:59.155761 kubelet[1572]: E0813 00:03:59.155517 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:03:59.155761 kubelet[1572]: E0813 00:03:59.155625 1572 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:03:59.156421 kubelet[1572]: E0813 00:03:59.156371 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
Aug 13 00:03:59.156498 kubelet[1572]: E0813 00:03:59.156465 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:03:59.156747 kubelet[1572]: I0813 00:03:59.156727 1572 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:03:59.156858 kubelet[1572]: I0813 00:03:59.156836 1572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:03:59.157765 kubelet[1572]: I0813 00:03:59.157739 1572 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:03:59.159847 kubelet[1572]: E0813 00:03:59.159545 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2ab593e17be7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:03:59.149415399 +0000 UTC m=+0.821632237,LastTimestamp:2025-08-13 00:03:59.149415399 +0000 UTC m=+0.821632237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 13 00:03:59.169089 kubelet[1572]: I0813 00:03:59.169062 1572 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:03:59.169275 kubelet[1572]: I0813 00:03:59.169259 1572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:03:59.169361 kubelet[1572]: I0813 00:03:59.169347 1572 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:03:59.171315 kubelet[1572]: I0813 00:03:59.171289 1572 policy_none.go:49] "None policy: Start"
Aug 13 00:03:59.171432 kubelet[1572]: I0813 00:03:59.171420 1572 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:03:59.171505 kubelet[1572]: I0813 00:03:59.171495 1572 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:03:59.177550 systemd[1]: Created slice kubepods.slice.
Aug 13 00:03:59.178461 kubelet[1572]: I0813 00:03:59.178410 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:03:59.180218 kubelet[1572]: I0813 00:03:59.180053 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:03:59.180218 kubelet[1572]: I0813 00:03:59.180079 1572 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:03:59.180218 kubelet[1572]: I0813 00:03:59.180099 1572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:03:59.180218 kubelet[1572]: I0813 00:03:59.180106 1572 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 00:03:59.180218 kubelet[1572]: E0813 00:03:59.180152 1572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:03:59.181146 kubelet[1572]: W0813 00:03:59.181054 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Aug 13 00:03:59.181146 kubelet[1572]: E0813 00:03:59.181106 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Aug 13 00:03:59.183058 systemd[1]: Created slice kubepods-burstable.slice.
Aug 13 00:03:59.188085 systemd[1]: Created slice kubepods-besteffort.slice.
Aug 13 00:03:59.199647 kubelet[1572]: I0813 00:03:59.199607 1572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:03:59.199817 kubelet[1572]: I0813 00:03:59.199788 1572 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:03:59.199855 kubelet[1572]: I0813 00:03:59.199800 1572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:03:59.200255 kubelet[1572]: I0813 00:03:59.200011 1572 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:03:59.200728 kubelet[1572]: E0813 00:03:59.200705 1572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:03:59.205781 kubelet[1572]: E0813 00:03:59.205754 1572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 13 00:03:59.289553 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice.
Aug 13 00:03:59.300475 kubelet[1572]: E0813 00:03:59.300324 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:03:59.301585 kubelet[1572]: I0813 00:03:59.301530 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:03:59.302425 kubelet[1572]: E0813 00:03:59.302389 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Aug 13 00:03:59.302742 systemd[1]: Created slice kubepods-burstable-pod08babcbb53d5075fee2df4e592c2a996.slice.
Aug 13 00:03:59.304226 kubelet[1572]: E0813 00:03:59.304206 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:03:59.306393 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice.
Aug 13 00:03:59.307827 kubelet[1572]: E0813 00:03:59.307804 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 13 00:03:59.356382 kubelet[1572]: I0813 00:03:59.356260 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:03:59.357123 kubelet[1572]: E0813 00:03:59.357011 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms"
Aug 13 00:03:59.456817 kubelet[1572]: I0813 00:03:59.456767 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:03:59.456817 kubelet[1572]: I0813 00:03:59.456820 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:03:59.457192 kubelet[1572]: I0813 00:03:59.456845 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:03:59.457192 kubelet[1572]: I0813 00:03:59.456868 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:03:59.457192 kubelet[1572]: I0813 00:03:59.456886 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:03:59.457192 kubelet[1572]: I0813 00:03:59.456912 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:03:59.457192 kubelet[1572]: I0813 00:03:59.456951 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:03:59.457317 kubelet[1572]: I0813 00:03:59.456974 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:03:59.504119 kubelet[1572]: I0813 00:03:59.504067 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:03:59.504486 kubelet[1572]: E0813 00:03:59.504457 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Aug 13 00:03:59.601431 kubelet[1572]: E0813 00:03:59.601396 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:59.602195 env[1223]: time="2025-08-13T00:03:59.602061442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}"
Aug 13 00:03:59.606460 kubelet[1572]: E0813 00:03:59.606367 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:59.607107 env[1223]: time="2025-08-13T00:03:59.607059449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08babcbb53d5075fee2df4e592c2a996,Namespace:kube-system,Attempt:0,}"
Aug 13 00:03:59.608297 kubelet[1572]: E0813 00:03:59.608264 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:03:59.608707 env[1223]: time="2025-08-13T00:03:59.608668039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}"
Aug 13 00:03:59.757772 kubelet[1572]: E0813 00:03:59.757727 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms"
Aug 13 00:03:59.905720 kubelet[1572]: I0813 00:03:59.905603 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:03:59.906316 kubelet[1572]: E0813 00:03:59.906268 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Aug 13 00:04:00.115119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364529789.mount: Deactivated successfully.
Aug 13 00:04:00.121006 env[1223]: time="2025-08-13T00:04:00.120962910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:00.123087 env[1223]: time="2025-08-13T00:04:00.123055811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:00.124004 env[1223]: time="2025-08-13T00:04:00.123978957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:00.125230 env[1223]: time="2025-08-13T00:04:00.125191424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:00.126506 env[1223]:
time="2025-08-13T00:04:00.126474757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.127205 env[1223]: time="2025-08-13T00:04:00.127163086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.127874 env[1223]: time="2025-08-13T00:04:00.127845359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.130122 env[1223]: time="2025-08-13T00:04:00.130087652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.132452 env[1223]: time="2025-08-13T00:04:00.132399047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.136165 env[1223]: time="2025-08-13T00:04:00.136112688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.138076 env[1223]: time="2025-08-13T00:04:00.138025756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.138942 env[1223]: time="2025-08-13T00:04:00.138919986Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:04:00.170715 env[1223]: time="2025-08-13T00:04:00.170430083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:04:00.170715 env[1223]: time="2025-08-13T00:04:00.170467903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:04:00.170715 env[1223]: time="2025-08-13T00:04:00.170478330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:04:00.171228 env[1223]: time="2025-08-13T00:04:00.171127677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:04:00.171228 env[1223]: time="2025-08-13T00:04:00.171167983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:04:00.171228 env[1223]: time="2025-08-13T00:04:00.171178651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:04:00.171376 env[1223]: time="2025-08-13T00:04:00.171100245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f3daaf775fa65b66b6bc460d76cb121ca1c041335f1da0f32bf899fa59cd9d pid=1621 runtime=io.containerd.runc.v2 Aug 13 00:04:00.171468 env[1223]: time="2025-08-13T00:04:00.171424497Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/788dd66cb371e776e887375e7b264ea441add0be066aa13247e87045a2b3e63f pid=1622 runtime=io.containerd.runc.v2 Aug 13 00:04:00.173355 kubelet[1572]: E0813 00:04:00.173241 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2ab593e17be7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:03:59.149415399 +0000 UTC m=+0.821632237,LastTimestamp:2025-08-13 00:03:59.149415399 +0000 UTC m=+0.821632237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:04:00.178628 env[1223]: time="2025-08-13T00:04:00.178416754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:04:00.178628 env[1223]: time="2025-08-13T00:04:00.178461472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:04:00.178628 env[1223]: time="2025-08-13T00:04:00.178472100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:04:00.178797 env[1223]: time="2025-08-13T00:04:00.178633845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97c626f6cc05e0d1f47b455038309286b552224b67ac79bbe6423e17602afc02 pid=1651 runtime=io.containerd.runc.v2 Aug 13 00:04:00.185366 systemd[1]: Started cri-containerd-11f3daaf775fa65b66b6bc460d76cb121ca1c041335f1da0f32bf899fa59cd9d.scope. Aug 13 00:04:00.186335 systemd[1]: Started cri-containerd-788dd66cb371e776e887375e7b264ea441add0be066aa13247e87045a2b3e63f.scope. Aug 13 00:04:00.226825 systemd[1]: Started cri-containerd-97c626f6cc05e0d1f47b455038309286b552224b67ac79bbe6423e17602afc02.scope. Aug 13 00:04:00.272517 env[1223]: time="2025-08-13T00:04:00.272470393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:08babcbb53d5075fee2df4e592c2a996,Namespace:kube-system,Attempt:0,} returns sandbox id \"788dd66cb371e776e887375e7b264ea441add0be066aa13247e87045a2b3e63f\"" Aug 13 00:04:00.275308 kubelet[1572]: E0813 00:04:00.274934 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:00.277357 env[1223]: time="2025-08-13T00:04:00.277315528Z" level=info msg="CreateContainer within sandbox \"788dd66cb371e776e887375e7b264ea441add0be066aa13247e87045a2b3e63f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:04:00.279868 env[1223]: time="2025-08-13T00:04:00.279835391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"11f3daaf775fa65b66b6bc460d76cb121ca1c041335f1da0f32bf899fa59cd9d\"" Aug 13 00:04:00.280807 kubelet[1572]: E0813 00:04:00.280785 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:00.283120 env[1223]: time="2025-08-13T00:04:00.283070293Z" level=info msg="CreateContainer within sandbox \"11f3daaf775fa65b66b6bc460d76cb121ca1c041335f1da0f32bf899fa59cd9d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:04:00.290269 env[1223]: time="2025-08-13T00:04:00.290219362Z" level=info msg="CreateContainer within sandbox \"788dd66cb371e776e887375e7b264ea441add0be066aa13247e87045a2b3e63f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"206e79a052d25a3591df37ce6b70ea639a304849602d5e73acd67c8a1ff21fb2\"" Aug 13 00:04:00.290903 env[1223]: time="2025-08-13T00:04:00.290870434Z" level=info msg="StartContainer for \"206e79a052d25a3591df37ce6b70ea639a304849602d5e73acd67c8a1ff21fb2\"" Aug 13 00:04:00.291148 env[1223]: time="2025-08-13T00:04:00.290932998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c626f6cc05e0d1f47b455038309286b552224b67ac79bbe6423e17602afc02\"" Aug 13 00:04:00.291790 kubelet[1572]: W0813 00:04:00.291731 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Aug 13 00:04:00.291859 kubelet[1572]: E0813 00:04:00.291797 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:04:00.292172 kubelet[1572]: E0813 00:04:00.292144 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:00.293927 env[1223]: time="2025-08-13T00:04:00.293884034Z" level=info msg="CreateContainer within sandbox \"97c626f6cc05e0d1f47b455038309286b552224b67ac79bbe6423e17602afc02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:04:00.297475 env[1223]: time="2025-08-13T00:04:00.297428791Z" level=info msg="CreateContainer within sandbox \"11f3daaf775fa65b66b6bc460d76cb121ca1c041335f1da0f32bf899fa59cd9d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb002b6fde4b18a778c453d87b7bcf4d92b59e896c6843201c351f847d6852d6\"" Aug 13 00:04:00.297871 env[1223]: time="2025-08-13T00:04:00.297837746Z" level=info msg="StartContainer for \"eb002b6fde4b18a778c453d87b7bcf4d92b59e896c6843201c351f847d6852d6\"" Aug 13 00:04:00.308739 env[1223]: time="2025-08-13T00:04:00.308681326Z" level=info msg="CreateContainer within sandbox \"97c626f6cc05e0d1f47b455038309286b552224b67ac79bbe6423e17602afc02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f77394fe0ac5cadcc36e45f7898c1046036b4f4b9061a690656b5ac954ac0f6d\"" Aug 13 00:04:00.309207 env[1223]: time="2025-08-13T00:04:00.309176427Z" level=info msg="StartContainer for \"f77394fe0ac5cadcc36e45f7898c1046036b4f4b9061a690656b5ac954ac0f6d\"" Aug 13 00:04:00.317432 systemd[1]: Started cri-containerd-206e79a052d25a3591df37ce6b70ea639a304849602d5e73acd67c8a1ff21fb2.scope. Aug 13 00:04:00.334788 systemd[1]: Started cri-containerd-eb002b6fde4b18a778c453d87b7bcf4d92b59e896c6843201c351f847d6852d6.scope. 
Aug 13 00:04:00.336609 systemd[1]: Started cri-containerd-f77394fe0ac5cadcc36e45f7898c1046036b4f4b9061a690656b5ac954ac0f6d.scope. Aug 13 00:04:00.397985 env[1223]: time="2025-08-13T00:04:00.397939641Z" level=info msg="StartContainer for \"eb002b6fde4b18a778c453d87b7bcf4d92b59e896c6843201c351f847d6852d6\" returns successfully" Aug 13 00:04:00.403778 env[1223]: time="2025-08-13T00:04:00.403734431Z" level=info msg="StartContainer for \"f77394fe0ac5cadcc36e45f7898c1046036b4f4b9061a690656b5ac954ac0f6d\" returns successfully" Aug 13 00:04:00.426750 env[1223]: time="2025-08-13T00:04:00.426647854Z" level=info msg="StartContainer for \"206e79a052d25a3591df37ce6b70ea639a304849602d5e73acd67c8a1ff21fb2\" returns successfully" Aug 13 00:04:00.433593 kubelet[1572]: W0813 00:04:00.431265 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Aug 13 00:04:00.433593 kubelet[1572]: E0813 00:04:00.431345 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:04:00.477038 kubelet[1572]: W0813 00:04:00.476980 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Aug 13 00:04:00.477451 kubelet[1572]: E0813 00:04:00.477424 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:04:00.545398 kubelet[1572]: W0813 00:04:00.545338 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Aug 13 00:04:00.545624 kubelet[1572]: E0813 00:04:00.545602 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:04:00.558420 kubelet[1572]: E0813 00:04:00.558375 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Aug 13 00:04:00.707793 kubelet[1572]: I0813 00:04:00.707702 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 13 00:04:01.186711 kubelet[1572]: E0813 00:04:01.186672 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:04:01.186855 kubelet[1572]: E0813 00:04:01.186805 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:01.189036 kubelet[1572]: E0813 00:04:01.189007 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
Aug 13 00:04:01.189137 kubelet[1572]: E0813 00:04:01.189121 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:01.190763 kubelet[1572]: E0813 00:04:01.190743 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:04:01.191003 kubelet[1572]: E0813 00:04:01.190987 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:02.193248 kubelet[1572]: E0813 00:04:02.193210 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:04:02.193594 kubelet[1572]: E0813 00:04:02.193336 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:02.193934 kubelet[1572]: E0813 00:04:02.193911 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 13 00:04:02.195265 kubelet[1572]: E0813 00:04:02.195229 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:02.642236 kubelet[1572]: E0813 00:04:02.642190 1572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:04:02.761474 kubelet[1572]: I0813 00:04:02.761438 1572 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 13 00:04:02.857625 kubelet[1572]: I0813 00:04:02.857548 
1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:04:02.864108 kubelet[1572]: E0813 00:04:02.864075 1572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:04:02.864108 kubelet[1572]: I0813 00:04:02.864106 1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 13 00:04:02.865917 kubelet[1572]: E0813 00:04:02.865890 1572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 00:04:02.865917 kubelet[1572]: I0813 00:04:02.865914 1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 13 00:04:02.867504 kubelet[1572]: E0813 00:04:02.867471 1572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 00:04:03.126072 kubelet[1572]: I0813 00:04:03.126036 1572 apiserver.go:52] "Watching apiserver" Aug 13 00:04:03.156083 kubelet[1572]: I0813 00:04:03.156049 1572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:04:04.795723 systemd[1]: Reloading. 
Aug 13 00:04:04.853915 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-08-13T00:04:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:04:04.853950 /usr/lib/systemd/system-generators/torcx-generator[1869]: time="2025-08-13T00:04:04Z" level=info msg="torcx already run" Aug 13 00:04:04.916718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:04:04.916738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:04:04.932746 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:04:05.017046 systemd[1]: Stopping kubelet.service... Aug 13 00:04:05.041433 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:04:05.041667 systemd[1]: Stopped kubelet.service. Aug 13 00:04:05.041723 systemd[1]: kubelet.service: Consumed 1.115s CPU time. Aug 13 00:04:05.043452 systemd[1]: Starting kubelet.service... Aug 13 00:04:05.143666 systemd[1]: Started kubelet.service. Aug 13 00:04:05.190330 kubelet[1911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:04:05.190330 kubelet[1911]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:04:05.190330 kubelet[1911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:04:05.192433 kubelet[1911]: I0813 00:04:05.192396 1911 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:04:05.209521 kubelet[1911]: I0813 00:04:05.209486 1911 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:04:05.209675 kubelet[1911]: I0813 00:04:05.209662 1911 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:04:05.210071 kubelet[1911]: I0813 00:04:05.210053 1911 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:04:05.211508 kubelet[1911]: I0813 00:04:05.211483 1911 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:04:05.214637 kubelet[1911]: I0813 00:04:05.214597 1911 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:04:05.219680 kubelet[1911]: E0813 00:04:05.219627 1911 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:04:05.219819 kubelet[1911]: I0813 00:04:05.219803 1911 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:04:05.222746 kubelet[1911]: I0813 00:04:05.222721 1911 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:04:05.223066 kubelet[1911]: I0813 00:04:05.223039 1911 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:04:05.223417 kubelet[1911]: I0813 00:04:05.223140 1911 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:04:05.223574 kubelet[1911]: I0813 00:04:05.223546 1911 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 13 00:04:05.223648 kubelet[1911]: I0813 00:04:05.223638 1911 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 00:04:05.223751 kubelet[1911]: I0813 00:04:05.223740 1911 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:04:05.223969 kubelet[1911]: I0813 00:04:05.223954 1911 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 00:04:05.224086 kubelet[1911]: I0813 00:04:05.224060 1911 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 00:04:05.224398 kubelet[1911]: I0813 00:04:05.224380 1911 kubelet.go:352] "Adding apiserver pod source"
Aug 13 00:04:05.225750 kubelet[1911]: I0813 00:04:05.225709 1911 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 00:04:05.227255 kubelet[1911]: I0813 00:04:05.227216 1911 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Aug 13 00:04:05.228031 kubelet[1911]: I0813 00:04:05.227961 1911 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 00:04:05.228968 kubelet[1911]: I0813 00:04:05.228947 1911 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 00:04:05.229068 kubelet[1911]: I0813 00:04:05.229057 1911 server.go:1287] "Started kubelet"
Aug 13 00:04:05.230833 kubelet[1911]: I0813 00:04:05.230814 1911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 00:04:05.233362 kubelet[1911]: I0813 00:04:05.233313 1911 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 00:04:05.234466 kubelet[1911]: I0813 00:04:05.234440 1911 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 00:04:05.239611 kubelet[1911]: I0813 00:04:05.235533 1911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 00:04:05.239611 kubelet[1911]: I0813 00:04:05.235905 1911 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 00:04:05.239611 kubelet[1911]: I0813 00:04:05.236138 1911 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 00:04:05.243756 kubelet[1911]: I0813 00:04:05.243735 1911 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 00:04:05.244076 kubelet[1911]: E0813 00:04:05.244050 1911 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 13 00:04:05.244488 kubelet[1911]: I0813 00:04:05.244468 1911 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 00:04:05.245505 kubelet[1911]: I0813 00:04:05.244710 1911 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 00:04:05.246347 kubelet[1911]: I0813 00:04:05.246110 1911 factory.go:221] Registration of the systemd container factory successfully
Aug 13 00:04:05.246347 kubelet[1911]: I0813 00:04:05.246225 1911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 00:04:05.247832 kubelet[1911]: I0813 00:04:05.247733 1911 factory.go:221] Registration of the containerd container factory successfully
Aug 13 00:04:05.251682 kubelet[1911]: E0813 00:04:05.251651 1911 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 00:04:05.268974 kubelet[1911]: I0813 00:04:05.268892 1911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 00:04:05.270699 kubelet[1911]: I0813 00:04:05.270665 1911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 00:04:05.271164 kubelet[1911]: I0813 00:04:05.271145 1911 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 00:04:05.271303 kubelet[1911]: I0813 00:04:05.271286 1911 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 00:04:05.271368 kubelet[1911]: I0813 00:04:05.271359 1911 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 00:04:05.271513 kubelet[1911]: E0813 00:04:05.271486 1911 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 00:04:05.285283 kubelet[1911]: I0813 00:04:05.285248 1911 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 00:04:05.285283 kubelet[1911]: I0813 00:04:05.285271 1911 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 00:04:05.285443 kubelet[1911]: I0813 00:04:05.285306 1911 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 00:04:05.285708 kubelet[1911]: I0813 00:04:05.285677 1911 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 00:04:05.285768 kubelet[1911]: I0813 00:04:05.285702 1911 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 00:04:05.285768 kubelet[1911]: I0813 00:04:05.285724 1911 policy_none.go:49] "None policy: Start"
Aug 13 00:04:05.285768 kubelet[1911]: I0813 00:04:05.285733 1911 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 00:04:05.285768 kubelet[1911]: I0813 00:04:05.285743 1911 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 00:04:05.285869 kubelet[1911]: I0813 00:04:05.285849 1911 state_mem.go:75] "Updated machine memory state"
Aug 13 00:04:05.289732 kubelet[1911]: I0813 00:04:05.289696 1911 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 00:04:05.290044 kubelet[1911]: I0813 00:04:05.289900 1911 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 00:04:05.290044 kubelet[1911]: I0813 00:04:05.289919 1911 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 00:04:05.290798 kubelet[1911]: I0813 00:04:05.290194 1911 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 00:04:05.293035 kubelet[1911]: E0813 00:04:05.292999 1911 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 00:04:05.372917 kubelet[1911]: I0813 00:04:05.372864 1911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:05.372917 kubelet[1911]: I0813 00:04:05.372907 1911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 13 00:04:05.373586 kubelet[1911]: I0813 00:04:05.373545 1911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.394860 kubelet[1911]: I0813 00:04:05.394752 1911 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 13 00:04:05.409381 kubelet[1911]: I0813 00:04:05.409347 1911 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 13 00:04:05.409699 kubelet[1911]: I0813 00:04:05.409645 1911 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 13 00:04:05.447110 kubelet[1911]: I0813 00:04:05.447056 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Aug 13 00:04:05.447110 kubelet[1911]: I0813 00:04:05.447111 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:05.447325 kubelet[1911]: I0813 00:04:05.447134 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:05.447325 kubelet[1911]: I0813 00:04:05.447188 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08babcbb53d5075fee2df4e592c2a996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"08babcbb53d5075fee2df4e592c2a996\") " pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:05.447325 kubelet[1911]: I0813 00:04:05.447219 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.447325 kubelet[1911]: I0813 00:04:05.447240 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.447325 kubelet[1911]: I0813 00:04:05.447257 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.447457 kubelet[1911]: I0813 00:04:05.447272 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.447457 kubelet[1911]: I0813 00:04:05.447286 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Aug 13 00:04:05.685318 kubelet[1911]: E0813 00:04:05.685200 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:05.687162 kubelet[1911]: E0813 00:04:05.687061 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:05.687162 kubelet[1911]: E0813 00:04:05.687133 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:05.807029 sudo[1947]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 13 00:04:05.807274 sudo[1947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Aug 13 00:04:06.226461 kubelet[1911]: I0813 00:04:06.226414 1911 apiserver.go:52] "Watching apiserver"
Aug 13 00:04:06.245232 kubelet[1911]: I0813 00:04:06.245192 1911 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 00:04:06.285151 kubelet[1911]: I0813 00:04:06.285115 1911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:06.285440 kubelet[1911]: E0813 00:04:06.285338 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:06.285440 kubelet[1911]: I0813 00:04:06.285342 1911 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 13 00:04:06.300155 kubelet[1911]: E0813 00:04:06.300086 1911 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 13 00:04:06.300405 kubelet[1911]: E0813 00:04:06.300277 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:06.301165 kubelet[1911]: E0813 00:04:06.301138 1911 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 13 00:04:06.301441 kubelet[1911]: E0813 00:04:06.301423 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:06.327672 kubelet[1911]: I0813 00:04:06.327611 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.327584405 podStartE2EDuration="1.327584405s" podCreationTimestamp="2025-08-13 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:06.316599869 +0000 UTC m=+1.168460724" watchObservedRunningTime="2025-08-13 00:04:06.327584405 +0000 UTC m=+1.179445220"
Aug 13 00:04:06.327993 kubelet[1911]: I0813 00:04:06.327962 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.327954442 podStartE2EDuration="1.327954442s" podCreationTimestamp="2025-08-13 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:06.326015792 +0000 UTC m=+1.177876647" watchObservedRunningTime="2025-08-13 00:04:06.327954442 +0000 UTC m=+1.179815257"
Aug 13 00:04:06.340552 sudo[1947]: pam_unix(sudo:session): session closed for user root
Aug 13 00:04:06.352009 kubelet[1911]: I0813 00:04:06.351949 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.351928321 podStartE2EDuration="1.351928321s" podCreationTimestamp="2025-08-13 00:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:06.340933614 +0000 UTC m=+1.192794469" watchObservedRunningTime="2025-08-13 00:04:06.351928321 +0000 UTC m=+1.203789176"
Aug 13 00:04:07.287588 kubelet[1911]: E0813 00:04:07.287541 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:07.288172 kubelet[1911]: E0813 00:04:07.288149 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:08.154028 sudo[1324]: pam_unix(sudo:session): session closed for user root
Aug 13 00:04:08.156049 sshd[1320]: pam_unix(sshd:session): session closed for user core
Aug 13 00:04:08.158813 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:41848.service: Deactivated successfully.
Aug 13 00:04:08.159649 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:04:08.159827 systemd[1]: session-5.scope: Consumed 6.282s CPU time.
Aug 13 00:04:08.160476 systemd-logind[1211]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:04:08.161274 systemd-logind[1211]: Removed session 5.
Aug 13 00:04:08.288656 kubelet[1911]: E0813 00:04:08.288610 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:08.409179 kubelet[1911]: E0813 00:04:08.409059 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:09.970613 kubelet[1911]: E0813 00:04:09.970558 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:10.540782 kubelet[1911]: I0813 00:04:10.540746 1911 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 00:04:10.541059 env[1223]: time="2025-08-13T00:04:10.541025191Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 00:04:10.541501 kubelet[1911]: I0813 00:04:10.541471 1911 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 00:04:11.086765 systemd[1]: Created slice kubepods-besteffort-pod4e837856_5764_4858_8aa1_888c0ef9f058.slice.
Aug 13 00:04:11.108445 systemd[1]: Created slice kubepods-burstable-podb2c1c4f3_456c_45e3_b0db_ebd05ad6d13d.slice.
Aug 13 00:04:11.111849 kubelet[1911]: W0813 00:04:11.111792 1911 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Aug 13 00:04:11.111849 kubelet[1911]: E0813 00:04:11.111847 1911 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Aug 13 00:04:11.112214 kubelet[1911]: W0813 00:04:11.111910 1911 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Aug 13 00:04:11.112214 kubelet[1911]: E0813 00:04:11.111922 1911 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Aug 13 00:04:11.195259 kubelet[1911]: I0813 00:04:11.195206 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-bpf-maps\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195259 kubelet[1911]: I0813 00:04:11.195260 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195281 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e837856-5764-4858-8aa1-888c0ef9f058-kube-proxy\") pod \"kube-proxy-prm9g\" (UID: \"4e837856-5764-4858-8aa1-888c0ef9f058\") " pod="kube-system/kube-proxy-prm9g"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195300 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-etc-cni-netd\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195321 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-lib-modules\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195336 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-cgroup\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195350 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-xtables-lock\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195465 kubelet[1911]: I0813 00:04:11.195366 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-kernel\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195641 kubelet[1911]: I0813 00:04:11.195382 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7lfl\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-kube-api-access-p7lfl\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195641 kubelet[1911]: I0813 00:04:11.195420 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5hx\" (UniqueName: \"kubernetes.io/projected/4e837856-5764-4858-8aa1-888c0ef9f058-kube-api-access-cg5hx\") pod \"kube-proxy-prm9g\" (UID: \"4e837856-5764-4858-8aa1-888c0ef9f058\") " pod="kube-system/kube-proxy-prm9g"
Aug 13 00:04:11.195641 kubelet[1911]: I0813 00:04:11.195461 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e837856-5764-4858-8aa1-888c0ef9f058-xtables-lock\") pod \"kube-proxy-prm9g\" (UID: \"4e837856-5764-4858-8aa1-888c0ef9f058\") " pod="kube-system/kube-proxy-prm9g"
Aug 13 00:04:11.195641 kubelet[1911]: I0813 00:04:11.195484 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hostproc\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195641 kubelet[1911]: I0813 00:04:11.195518 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cni-path\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195764 kubelet[1911]: I0813 00:04:11.195539 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-clustermesh-secrets\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195764 kubelet[1911]: I0813 00:04:11.195577 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e837856-5764-4858-8aa1-888c0ef9f058-lib-modules\") pod \"kube-proxy-prm9g\" (UID: \"4e837856-5764-4858-8aa1-888c0ef9f058\") " pod="kube-system/kube-proxy-prm9g"
Aug 13 00:04:11.195764 kubelet[1911]: I0813 00:04:11.195594 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-run\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195764 kubelet[1911]: I0813 00:04:11.195609 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-config-path\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.195764 kubelet[1911]: I0813 00:04:11.195625 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-net\") pod \"cilium-jwn9j\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " pod="kube-system/cilium-jwn9j"
Aug 13 00:04:11.298024 kubelet[1911]: I0813 00:04:11.297981 1911 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 13 00:04:11.397188 kubelet[1911]: E0813 00:04:11.397029 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:11.397720 env[1223]: time="2025-08-13T00:04:11.397673739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prm9g,Uid:4e837856-5764-4858-8aa1-888c0ef9f058,Namespace:kube-system,Attempt:0,}"
Aug 13 00:04:11.557672 env[1223]: time="2025-08-13T00:04:11.557594336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:04:11.557672 env[1223]: time="2025-08-13T00:04:11.557640225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:04:11.557672 env[1223]: time="2025-08-13T00:04:11.557650957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:04:11.558012 env[1223]: time="2025-08-13T00:04:11.557773567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ebbbb4dfedd6ff6ff6405a722eb697ba2a2f1330e0129eaea0f2874b6a1641 pid=2006 runtime=io.containerd.runc.v2
Aug 13 00:04:11.568114 systemd[1]: Started cri-containerd-10ebbbb4dfedd6ff6ff6405a722eb697ba2a2f1330e0129eaea0f2874b6a1641.scope.
Aug 13 00:04:11.595952 env[1223]: time="2025-08-13T00:04:11.595905935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prm9g,Uid:4e837856-5764-4858-8aa1-888c0ef9f058,Namespace:kube-system,Attempt:0,} returns sandbox id \"10ebbbb4dfedd6ff6ff6405a722eb697ba2a2f1330e0129eaea0f2874b6a1641\""
Aug 13 00:04:11.596762 kubelet[1911]: E0813 00:04:11.596737 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:11.599352 env[1223]: time="2025-08-13T00:04:11.599318336Z" level=info msg="CreateContainer within sandbox \"10ebbbb4dfedd6ff6ff6405a722eb697ba2a2f1330e0129eaea0f2874b6a1641\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 00:04:11.608991 systemd[1]: Created slice kubepods-besteffort-pod8820c744_69cb_480a_92f1_d564216e814f.slice.
Aug 13 00:04:11.698230 kubelet[1911]: I0813 00:04:11.698075 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnjf\" (UniqueName: \"kubernetes.io/projected/8820c744-69cb-480a-92f1-d564216e814f-kube-api-access-psnjf\") pod \"cilium-operator-6c4d7847fc-dmz98\" (UID: \"8820c744-69cb-480a-92f1-d564216e814f\") " pod="kube-system/cilium-operator-6c4d7847fc-dmz98"
Aug 13 00:04:11.698230 kubelet[1911]: I0813 00:04:11.698122 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8820c744-69cb-480a-92f1-d564216e814f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dmz98\" (UID: \"8820c744-69cb-480a-92f1-d564216e814f\") " pod="kube-system/cilium-operator-6c4d7847fc-dmz98"
Aug 13 00:04:11.798249 env[1223]: time="2025-08-13T00:04:11.796503536Z" level=info msg="CreateContainer within sandbox \"10ebbbb4dfedd6ff6ff6405a722eb697ba2a2f1330e0129eaea0f2874b6a1641\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcea5ad15db021e2803569ae50a92c4a3ab2268eca78001e569f239fb52e6431\""
Aug 13 00:04:11.798249 env[1223]: time="2025-08-13T00:04:11.797472369Z" level=info msg="StartContainer for \"fcea5ad15db021e2803569ae50a92c4a3ab2268eca78001e569f239fb52e6431\""
Aug 13 00:04:11.812748 systemd[1]: Started cri-containerd-fcea5ad15db021e2803569ae50a92c4a3ab2268eca78001e569f239fb52e6431.scope.
Aug 13 00:04:11.878731 env[1223]: time="2025-08-13T00:04:11.878657315Z" level=info msg="StartContainer for \"fcea5ad15db021e2803569ae50a92c4a3ab2268eca78001e569f239fb52e6431\" returns successfully"
Aug 13 00:04:12.212556 kubelet[1911]: E0813 00:04:12.212525 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:12.213401 env[1223]: time="2025-08-13T00:04:12.213355015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dmz98,Uid:8820c744-69cb-480a-92f1-d564216e814f,Namespace:kube-system,Attempt:0,}"
Aug 13 00:04:12.297233 kubelet[1911]: E0813 00:04:12.297154 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:12.299259 kubelet[1911]: E0813 00:04:12.299231 1911 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Aug 13 00:04:12.299259 kubelet[1911]: E0813 00:04:12.299254 1911 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-jwn9j: failed to sync secret cache: timed out waiting for the condition
Aug 13 00:04:12.299400 kubelet[1911]: E0813 00:04:12.299320 1911 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls podName:b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d nodeName:}" failed. No retries permitted until 2025-08-13 00:04:12.799297665 +0000 UTC m=+7.651158480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls") pod "cilium-jwn9j" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d") : failed to sync secret cache: timed out waiting for the condition
Aug 13 00:04:12.358386 kubelet[1911]: I0813 00:04:12.358314 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prm9g" podStartSLOduration=1.3582964290000001 podStartE2EDuration="1.358296429s" podCreationTimestamp="2025-08-13 00:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:12.35655195 +0000 UTC m=+7.208412805" watchObservedRunningTime="2025-08-13 00:04:12.358296429 +0000 UTC m=+7.210157244"
Aug 13 00:04:12.396699 env[1223]: time="2025-08-13T00:04:12.396617265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:04:12.396699 env[1223]: time="2025-08-13T00:04:12.396660349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:04:12.396896 env[1223]: time="2025-08-13T00:04:12.396671520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:04:12.396942 env[1223]: time="2025-08-13T00:04:12.396924015Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085 pid=2211 runtime=io.containerd.runc.v2
Aug 13 00:04:12.432262 systemd[1]: Started cri-containerd-3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085.scope.
Aug 13 00:04:12.474936 env[1223]: time="2025-08-13T00:04:12.474891504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dmz98,Uid:8820c744-69cb-480a-92f1-d564216e814f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\""
Aug 13 00:04:12.475840 kubelet[1911]: E0813 00:04:12.475817 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:12.476936 env[1223]: time="2025-08-13T00:04:12.476897686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 00:04:12.914733 kubelet[1911]: E0813 00:04:12.914622 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:12.915187 env[1223]: time="2025-08-13T00:04:12.915146663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwn9j,Uid:b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d,Namespace:kube-system,Attempt:0,}"
Aug 13 00:04:13.047442 env[1223]: time="2025-08-13T00:04:13.047364910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:04:13.047633 env[1223]: time="2025-08-13T00:04:13.047607221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:04:13.047754 env[1223]: time="2025-08-13T00:04:13.047730699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:04:13.048918 env[1223]: time="2025-08-13T00:04:13.048003038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a pid=2252 runtime=io.containerd.runc.v2
Aug 13 00:04:13.061141 systemd[1]: Started cri-containerd-7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a.scope.
Aug 13 00:04:13.131675 env[1223]: time="2025-08-13T00:04:13.131623699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jwn9j,Uid:b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\""
Aug 13 00:04:13.132603 kubelet[1911]: E0813 00:04:13.132495 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:13.585873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828283932.mount: Deactivated successfully.
Aug 13 00:04:14.247867 env[1223]: time="2025-08-13T00:04:14.247819024Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:14.249650 env[1223]: time="2025-08-13T00:04:14.249610840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:14.251056 env[1223]: time="2025-08-13T00:04:14.251028838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:14.251535 env[1223]: time="2025-08-13T00:04:14.251488812Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Aug 13 00:04:14.253956 env[1223]: time="2025-08-13T00:04:14.252898203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 00:04:14.254859 env[1223]: time="2025-08-13T00:04:14.254031424Z" level=info msg="CreateContainer within sandbox \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 13 00:04:14.263377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747495938.mount: Deactivated successfully.
Aug 13 00:04:14.268510 env[1223]: time="2025-08-13T00:04:14.268449461Z" level=info msg="CreateContainer within sandbox \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\""
Aug 13 00:04:14.269018 env[1223]: time="2025-08-13T00:04:14.268997475Z" level=info msg="StartContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\""
Aug 13 00:04:14.284803 systemd[1]: Started cri-containerd-c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6.scope.
Aug 13 00:04:14.332627 env[1223]: time="2025-08-13T00:04:14.332356350Z" level=info msg="StartContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" returns successfully"
Aug 13 00:04:15.306842 kubelet[1911]: E0813 00:04:15.306547 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:15.320596 kubelet[1911]: I0813 00:04:15.320428 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dmz98" podStartSLOduration=2.543988328 podStartE2EDuration="4.320409722s" podCreationTimestamp="2025-08-13 00:04:11 +0000 UTC" firstStartedPulling="2025-08-13 00:04:12.4763557 +0000 UTC m=+7.328216555" lastFinishedPulling="2025-08-13 00:04:14.252777094 +0000 UTC m=+9.104637949" observedRunningTime="2025-08-13 00:04:15.318961007 +0000 UTC m=+10.170821862" watchObservedRunningTime="2025-08-13 00:04:15.320409722 +0000 UTC m=+10.172270577"
Aug 13 00:04:16.310072 kubelet[1911]: E0813 00:04:16.308988 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:16.345490 kubelet[1911]: E0813 00:04:16.343833 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:18.421099 kubelet[1911]: E0813 00:04:18.420179 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:19.317361 kubelet[1911]: E0813 00:04:19.316111 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:19.980542 kubelet[1911]: E0813 00:04:19.978996 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:22.968666 update_engine[1216]: I0813 00:04:22.968624 1216 update_attempter.cc:509] Updating boot flags...
Aug 13 00:04:23.345346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269036383.mount: Deactivated successfully.
Aug 13 00:04:25.609068 env[1223]: time="2025-08-13T00:04:25.609004787Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:25.610547 env[1223]: time="2025-08-13T00:04:25.610512592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:25.612536 env[1223]: time="2025-08-13T00:04:25.612506243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 00:04:25.613127 env[1223]: time="2025-08-13T00:04:25.613096502Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Aug 13 00:04:25.616189 env[1223]: time="2025-08-13T00:04:25.616153413Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:04:25.633991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918273178.mount: Deactivated successfully.
Aug 13 00:04:25.639495 env[1223]: time="2025-08-13T00:04:25.639456073Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\""
Aug 13 00:04:25.640187 env[1223]: time="2025-08-13T00:04:25.640161471Z" level=info msg="StartContainer for \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\""
Aug 13 00:04:25.656976 systemd[1]: Started cri-containerd-f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2.scope.
Aug 13 00:04:25.746847 env[1223]: time="2025-08-13T00:04:25.744729632Z" level=info msg="StartContainer for \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\" returns successfully"
Aug 13 00:04:25.911355 systemd[1]: cri-containerd-f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2.scope: Deactivated successfully.
Aug 13 00:04:25.951284 env[1223]: time="2025-08-13T00:04:25.951234540Z" level=info msg="shim disconnected" id=f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2
Aug 13 00:04:25.951284 env[1223]: time="2025-08-13T00:04:25.951287407Z" level=warning msg="cleaning up after shim disconnected" id=f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2 namespace=k8s.io
Aug 13 00:04:25.951625 env[1223]: time="2025-08-13T00:04:25.951297332Z" level=info msg="cleaning up dead shim"
Aug 13 00:04:25.959021 env[1223]: time="2025-08-13T00:04:25.958965221Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2392 runtime=io.containerd.runc.v2\n"
Aug 13 00:04:26.354646 kubelet[1911]: E0813 00:04:26.354609 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:26.359155 env[1223]: time="2025-08-13T00:04:26.359059028Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:04:26.370695 env[1223]: time="2025-08-13T00:04:26.370627180Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\""
Aug 13 00:04:26.371441 env[1223]: time="2025-08-13T00:04:26.371412960Z" level=info msg="StartContainer for \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\""
Aug 13 00:04:26.386358 systemd[1]: Started cri-containerd-b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261.scope.
Aug 13 00:04:26.420370 env[1223]: time="2025-08-13T00:04:26.420325084Z" level=info msg="StartContainer for \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\" returns successfully"
Aug 13 00:04:26.445061 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:04:26.445283 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 00:04:26.445473 systemd[1]: Stopping systemd-sysctl.service...
Aug 13 00:04:26.447296 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:04:26.455757 systemd[1]: cri-containerd-b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261.scope: Deactivated successfully.
Aug 13 00:04:26.458376 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:04:26.483257 env[1223]: time="2025-08-13T00:04:26.483207401Z" level=info msg="shim disconnected" id=b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261
Aug 13 00:04:26.483257 env[1223]: time="2025-08-13T00:04:26.483250422Z" level=warning msg="cleaning up after shim disconnected" id=b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261 namespace=k8s.io
Aug 13 00:04:26.483257 env[1223]: time="2025-08-13T00:04:26.483260147Z" level=info msg="cleaning up dead shim"
Aug 13 00:04:26.490508 env[1223]: time="2025-08-13T00:04:26.490453024Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n"
Aug 13 00:04:26.630714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2-rootfs.mount: Deactivated successfully.
Aug 13 00:04:27.357443 kubelet[1911]: E0813 00:04:27.357411 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:27.363772 env[1223]: time="2025-08-13T00:04:27.363725559Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:04:27.393443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841852228.mount: Deactivated successfully.
Aug 13 00:04:27.400498 env[1223]: time="2025-08-13T00:04:27.400434323Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\""
Aug 13 00:04:27.401384 env[1223]: time="2025-08-13T00:04:27.401348545Z" level=info msg="StartContainer for \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\""
Aug 13 00:04:27.420254 systemd[1]: Started cri-containerd-fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea.scope.
Aug 13 00:04:27.451763 env[1223]: time="2025-08-13T00:04:27.451705001Z" level=info msg="StartContainer for \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\" returns successfully"
Aug 13 00:04:27.473814 systemd[1]: cri-containerd-fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea.scope: Deactivated successfully.
Aug 13 00:04:27.498428 env[1223]: time="2025-08-13T00:04:27.498371597Z" level=info msg="shim disconnected" id=fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea
Aug 13 00:04:27.498428 env[1223]: time="2025-08-13T00:04:27.498414977Z" level=warning msg="cleaning up after shim disconnected" id=fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea namespace=k8s.io
Aug 13 00:04:27.498428 env[1223]: time="2025-08-13T00:04:27.498424421Z" level=info msg="cleaning up dead shim"
Aug 13 00:04:27.505988 env[1223]: time="2025-08-13T00:04:27.505938406Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2515 runtime=io.containerd.runc.v2\n"
Aug 13 00:04:27.630324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea-rootfs.mount: Deactivated successfully.
Aug 13 00:04:28.364518 kubelet[1911]: E0813 00:04:28.364478 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:28.367938 env[1223]: time="2025-08-13T00:04:28.367891882Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:04:28.392048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605301896.mount: Deactivated successfully.
Aug 13 00:04:28.398956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571465088.mount: Deactivated successfully.
Aug 13 00:04:28.405069 env[1223]: time="2025-08-13T00:04:28.405008296Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\""
Aug 13 00:04:28.405969 env[1223]: time="2025-08-13T00:04:28.405939026Z" level=info msg="StartContainer for \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\""
Aug 13 00:04:28.427190 systemd[1]: Started cri-containerd-eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8.scope.
Aug 13 00:04:28.485938 systemd[1]: cri-containerd-eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8.scope: Deactivated successfully.
Aug 13 00:04:28.489476 env[1223]: time="2025-08-13T00:04:28.489424327Z" level=info msg="StartContainer for \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\" returns successfully"
Aug 13 00:04:28.519482 env[1223]: time="2025-08-13T00:04:28.519432093Z" level=info msg="shim disconnected" id=eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8
Aug 13 00:04:28.519750 env[1223]: time="2025-08-13T00:04:28.519729024Z" level=warning msg="cleaning up after shim disconnected" id=eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8 namespace=k8s.io
Aug 13 00:04:28.519813 env[1223]: time="2025-08-13T00:04:28.519799615Z" level=info msg="cleaning up dead shim"
Aug 13 00:04:28.528743 env[1223]: time="2025-08-13T00:04:28.528697451Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:04:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2570 runtime=io.containerd.runc.v2\n"
Aug 13 00:04:29.372355 kubelet[1911]: E0813 00:04:29.372293 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:29.378985 env[1223]: time="2025-08-13T00:04:29.378938729Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:04:29.406846 env[1223]: time="2025-08-13T00:04:29.403311857Z" level=info msg="CreateContainer within sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\""
Aug 13 00:04:29.406846 env[1223]: time="2025-08-13T00:04:29.404213396Z" level=info msg="StartContainer for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\""
Aug 13 00:04:29.427221 systemd[1]: Started cri-containerd-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3.scope.
Aug 13 00:04:29.523375 env[1223]: time="2025-08-13T00:04:29.523299145Z" level=info msg="StartContainer for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" returns successfully"
Aug 13 00:04:29.630497 systemd[1]: run-containerd-runc-k8s.io-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3-runc.xHkXyK.mount: Deactivated successfully.
Aug 13 00:04:29.755161 kubelet[1911]: I0813 00:04:29.755130 1911 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 00:04:29.851243 systemd[1]: Created slice kubepods-burstable-pode1da2e5d_485d_4b1a_9585_e7a167f6c28c.slice.
Aug 13 00:04:29.859425 systemd[1]: Created slice kubepods-burstable-podbee5d144_fb54_4beb_9617_3e75200fe6f4.slice.
Aug 13 00:04:29.931389 kubelet[1911]: I0813 00:04:29.931279 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1da2e5d-485d-4b1a-9585-e7a167f6c28c-config-volume\") pod \"coredns-668d6bf9bc-8mvzb\" (UID: \"e1da2e5d-485d-4b1a-9585-e7a167f6c28c\") " pod="kube-system/coredns-668d6bf9bc-8mvzb"
Aug 13 00:04:29.931389 kubelet[1911]: I0813 00:04:29.931326 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskhs\" (UniqueName: \"kubernetes.io/projected/bee5d144-fb54-4beb-9617-3e75200fe6f4-kube-api-access-tskhs\") pod \"coredns-668d6bf9bc-czdpb\" (UID: \"bee5d144-fb54-4beb-9617-3e75200fe6f4\") " pod="kube-system/coredns-668d6bf9bc-czdpb"
Aug 13 00:04:29.931389 kubelet[1911]: I0813 00:04:29.931350 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hlv\" (UniqueName: \"kubernetes.io/projected/e1da2e5d-485d-4b1a-9585-e7a167f6c28c-kube-api-access-p6hlv\") pod \"coredns-668d6bf9bc-8mvzb\" (UID: \"e1da2e5d-485d-4b1a-9585-e7a167f6c28c\") " pod="kube-system/coredns-668d6bf9bc-8mvzb"
Aug 13 00:04:29.931389 kubelet[1911]: I0813 00:04:29.931366 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee5d144-fb54-4beb-9617-3e75200fe6f4-config-volume\") pod \"coredns-668d6bf9bc-czdpb\" (UID: \"bee5d144-fb54-4beb-9617-3e75200fe6f4\") " pod="kube-system/coredns-668d6bf9bc-czdpb"
Aug 13 00:04:30.157222 kubelet[1911]: E0813 00:04:30.157188 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:30.158203 env[1223]: time="2025-08-13T00:04:30.158162850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8mvzb,Uid:e1da2e5d-485d-4b1a-9585-e7a167f6c28c,Namespace:kube-system,Attempt:0,}"
Aug 13 00:04:30.161902 kubelet[1911]: E0813 00:04:30.161856 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:30.162438 env[1223]: time="2025-08-13T00:04:30.162391069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czdpb,Uid:bee5d144-fb54-4beb-9617-3e75200fe6f4,Namespace:kube-system,Attempt:0,}"
Aug 13 00:04:30.357594 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Aug 13 00:04:30.377843 kubelet[1911]: E0813 00:04:30.377811 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:30.401359 kubelet[1911]: I0813 00:04:30.401286 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jwn9j" podStartSLOduration=6.920874401 podStartE2EDuration="19.401269225s" podCreationTimestamp="2025-08-13 00:04:11 +0000 UTC" firstStartedPulling="2025-08-13 00:04:13.133850741 +0000 UTC m=+7.985711596" lastFinishedPulling="2025-08-13 00:04:25.614245565 +0000 UTC m=+20.466106420" observedRunningTime="2025-08-13 00:04:30.399977225 +0000 UTC m=+25.251838080" watchObservedRunningTime="2025-08-13 00:04:30.401269225 +0000 UTC m=+25.253130080"
Aug 13 00:04:30.676595 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Aug 13 00:04:31.379075 kubelet[1911]: E0813 00:04:31.379038 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:32.332245 systemd-networkd[1047]: cilium_host: Link UP
Aug 13 00:04:32.332365 systemd-networkd[1047]: cilium_net: Link UP
Aug 13 00:04:32.333776 systemd-networkd[1047]: cilium_net: Gained carrier
Aug 13 00:04:32.334634 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Aug 13 00:04:32.334715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Aug 13 00:04:32.335126 systemd-networkd[1047]: cilium_host: Gained carrier
Aug 13 00:04:32.381022 kubelet[1911]: E0813 00:04:32.380923 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:32.467377 systemd-networkd[1047]: cilium_vxlan: Link UP
Aug 13 00:04:32.467382 systemd-networkd[1047]: cilium_vxlan: Gained carrier
Aug 13 00:04:32.615688 systemd-networkd[1047]: cilium_net: Gained IPv6LL
Aug 13 00:04:32.715450 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:57580.service.
Aug 13 00:04:32.760059 sshd[2834]: Accepted publickey for core from 10.0.0.1 port 57580 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg
Aug 13 00:04:32.761810 sshd[2834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:04:32.767629 systemd-logind[1211]: New session 6 of user core.
Aug 13 00:04:32.768365 systemd[1]: Started session-6.scope.
Aug 13 00:04:32.921685 kernel: NET: Registered PF_ALG protocol family
Aug 13 00:04:32.936662 sshd[2834]: pam_unix(sshd:session): session closed for user core
Aug 13 00:04:32.940621 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:57580.service: Deactivated successfully.
Aug 13 00:04:32.941509 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:04:32.942701 systemd-logind[1211]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:04:32.944192 systemd-logind[1211]: Removed session 6.
Aug 13 00:04:33.039680 systemd-networkd[1047]: cilium_host: Gained IPv6LL
Aug 13 00:04:33.564397 systemd-networkd[1047]: lxc_health: Link UP
Aug 13 00:04:33.572843 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:04:33.572687 systemd-networkd[1047]: lxc_health: Gained carrier
Aug 13 00:04:33.679728 systemd-networkd[1047]: cilium_vxlan: Gained IPv6LL
Aug 13 00:04:33.833150 systemd-networkd[1047]: lxc2a95d99004fe: Link UP
Aug 13 00:04:33.836298 systemd-networkd[1047]: lxce6a7e6a0c8b2: Link UP
Aug 13 00:04:33.858619 kernel: eth0: renamed from tmp00da0
Aug 13 00:04:33.866617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2a95d99004fe: link becomes ready
Aug 13 00:04:33.866409 systemd-networkd[1047]: lxc2a95d99004fe: Gained carrier
Aug 13 00:04:33.868589 kernel: eth0: renamed from tmpa1f70
Aug 13 00:04:33.876597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce6a7e6a0c8b2: link becomes ready
Aug 13 00:04:33.876794 systemd-networkd[1047]: lxce6a7e6a0c8b2: Gained carrier
Aug 13 00:04:34.920526 kubelet[1911]: E0813 00:04:34.920492 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:35.023721 systemd-networkd[1047]: lxce6a7e6a0c8b2: Gained IPv6LL
Aug 13 00:04:35.389038 kubelet[1911]: E0813 00:04:35.388990 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:35.471705 systemd-networkd[1047]: lxc_health: Gained IPv6LL
Aug 13 00:04:35.663704 systemd-networkd[1047]: lxc2a95d99004fe: Gained IPv6LL
Aug 13 00:04:36.391433 kubelet[1911]: E0813 00:04:36.391388 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:37.667250 env[1223]: time="2025-08-13T00:04:37.667161326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:04:37.667250 env[1223]: time="2025-08-13T00:04:37.667208100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:04:37.667250 env[1223]: time="2025-08-13T00:04:37.667233948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:04:37.667721 env[1223]: time="2025-08-13T00:04:37.667410201Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b pid=3153 runtime=io.containerd.runc.v2
Aug 13 00:04:37.672300 env[1223]: time="2025-08-13T00:04:37.672202126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:04:37.672300 env[1223]: time="2025-08-13T00:04:37.672248660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:04:37.672300 env[1223]: time="2025-08-13T00:04:37.672266146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:04:37.672504 env[1223]: time="2025-08-13T00:04:37.672415831Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f706143d89fdc901ba4dd2d128f2909be313d28d1f8043e2174136016903e1 pid=3162 runtime=io.containerd.runc.v2
Aug 13 00:04:37.687214 systemd[1]: run-containerd-runc-k8s.io-00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b-runc.axHt9Q.mount: Deactivated successfully.
Aug 13 00:04:37.692673 systemd[1]: Started cri-containerd-00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b.scope.
Aug 13 00:04:37.696010 systemd[1]: Started cri-containerd-a1f706143d89fdc901ba4dd2d128f2909be313d28d1f8043e2174136016903e1.scope.
Aug 13 00:04:37.746596 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:04:37.756974 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 13 00:04:37.772557 env[1223]: time="2025-08-13T00:04:37.772199084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-czdpb,Uid:bee5d144-fb54-4beb-9617-3e75200fe6f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b\""
Aug 13 00:04:37.773021 kubelet[1911]: E0813 00:04:37.772983 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:37.776328 env[1223]: time="2025-08-13T00:04:37.776030999Z" level=info msg="CreateContainer within sandbox \"00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:04:37.780581 env[1223]: time="2025-08-13T00:04:37.780521673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8mvzb,Uid:e1da2e5d-485d-4b1a-9585-e7a167f6c28c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1f706143d89fdc901ba4dd2d128f2909be313d28d1f8043e2174136016903e1\""
Aug 13 00:04:37.782279 kubelet[1911]: E0813 00:04:37.782234 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:04:37.786591 env[1223]: time="2025-08-13T00:04:37.786529445Z" level=info msg="CreateContainer within sandbox \"a1f706143d89fdc901ba4dd2d128f2909be313d28d1f8043e2174136016903e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 00:04:37.800395 env[1223]: time="2025-08-13T00:04:37.800330247Z" level=info msg="CreateContainer within sandbox \"00da0d1a829838eb88bafe3d319f1fc921f0c4921319a20b38bcd81b8d76554b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a99ee8adde740c8d005a775c0d6eb747fc9ff2da3447719b9c258b2387e55f4\""
Aug 13 00:04:37.801230 env[1223]: time="2025-08-13T00:04:37.801190747Z" level=info msg="StartContainer for \"1a99ee8adde740c8d005a775c0d6eb747fc9ff2da3447719b9c258b2387e55f4\""
Aug 13 00:04:37.805539 env[1223]: time="2025-08-13T00:04:37.805482161Z" level=info msg="CreateContainer within sandbox \"a1f706143d89fdc901ba4dd2d128f2909be313d28d1f8043e2174136016903e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9698bd835b5ec953f0d3182ec887e306bb53af8b084fcb6cbd4696ea5c7ea5d1\""
Aug 13 00:04:37.806164 env[1223]: time="2025-08-13T00:04:37.806017483Z" level=info msg="StartContainer for \"9698bd835b5ec953f0d3182ec887e306bb53af8b084fcb6cbd4696ea5c7ea5d1\""
Aug 13 00:04:37.822684 systemd[1]: Started cri-containerd-9698bd835b5ec953f0d3182ec887e306bb53af8b084fcb6cbd4696ea5c7ea5d1.scope.
Aug 13 00:04:37.824944 systemd[1]: Started cri-containerd-1a99ee8adde740c8d005a775c0d6eb747fc9ff2da3447719b9c258b2387e55f4.scope.
Aug 13 00:04:37.882130 env[1223]: time="2025-08-13T00:04:37.882068138Z" level=info msg="StartContainer for \"9698bd835b5ec953f0d3182ec887e306bb53af8b084fcb6cbd4696ea5c7ea5d1\" returns successfully"
Aug 13 00:04:37.882924 env[1223]: time="2025-08-13T00:04:37.882860537Z" level=info msg="StartContainer for \"1a99ee8adde740c8d005a775c0d6eb747fc9ff2da3447719b9c258b2387e55f4\" returns successfully"
Aug 13 00:04:37.944630 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:57584.service.
Aug 13 00:04:37.992225 sshd[3292]: Accepted publickey for core from 10.0.0.1 port 57584 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:38.001027 sshd[3292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:38.008632 systemd[1]: Started session-7.scope. Aug 13 00:04:38.009213 systemd-logind[1211]: New session 7 of user core. Aug 13 00:04:38.133909 sshd[3292]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:38.136952 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:57584.service: Deactivated successfully. Aug 13 00:04:38.137769 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:04:38.138308 systemd-logind[1211]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:04:38.139154 systemd-logind[1211]: Removed session 7. Aug 13 00:04:38.398178 kubelet[1911]: E0813 00:04:38.397977 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:38.400843 kubelet[1911]: E0813 00:04:38.400813 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:38.410852 kubelet[1911]: I0813 00:04:38.410727 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8mvzb" podStartSLOduration=27.410710697 podStartE2EDuration="27.410710697s" podCreationTimestamp="2025-08-13 00:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:38.410246603 +0000 UTC m=+33.262107458" watchObservedRunningTime="2025-08-13 00:04:38.410710697 +0000 UTC m=+33.262571552" Aug 13 00:04:38.452468 kubelet[1911]: I0813 00:04:38.452401 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-czdpb" podStartSLOduration=27.452380406 podStartE2EDuration="27.452380406s" podCreationTimestamp="2025-08-13 00:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:04:38.439428002 +0000 UTC m=+33.291288857" watchObservedRunningTime="2025-08-13 00:04:38.452380406 +0000 UTC m=+33.304241261" Aug 13 00:04:39.402915 kubelet[1911]: E0813 00:04:39.402523 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:39.402915 kubelet[1911]: E0813 00:04:39.402597 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:40.404904 kubelet[1911]: E0813 00:04:40.404876 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:40.405251 kubelet[1911]: E0813 00:04:40.405020 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:04:43.139931 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:54916.service. Aug 13 00:04:43.180634 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 54916 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:43.181085 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:43.191771 systemd-logind[1211]: New session 8 of user core. Aug 13 00:04:43.192433 systemd[1]: Started session-8.scope. 
Aug 13 00:04:43.345607 sshd[3320]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:43.348310 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:54916.service: Deactivated successfully. Aug 13 00:04:43.349143 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:04:43.349792 systemd-logind[1211]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:04:43.350743 systemd-logind[1211]: Removed session 8. Aug 13 00:04:48.351495 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:54922.service. Aug 13 00:04:48.391177 sshd[3334]: Accepted publickey for core from 10.0.0.1 port 54922 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:48.392474 sshd[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:48.397690 systemd[1]: Started session-9.scope. Aug 13 00:04:48.397930 systemd-logind[1211]: New session 9 of user core. Aug 13 00:04:48.508196 sshd[3334]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:48.511094 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:54922.service: Deactivated successfully. Aug 13 00:04:48.511872 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:04:48.512439 systemd-logind[1211]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:04:48.513674 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:54932.service. Aug 13 00:04:48.514387 systemd-logind[1211]: Removed session 9. Aug 13 00:04:48.549607 sshd[3349]: Accepted publickey for core from 10.0.0.1 port 54932 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:48.551293 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:48.554797 systemd-logind[1211]: New session 10 of user core. Aug 13 00:04:48.555514 systemd[1]: Started session-10.scope. Aug 13 00:04:48.703859 sshd[3349]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:48.711228 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:54942.service. 
Aug 13 00:04:48.712032 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:54932.service: Deactivated successfully. Aug 13 00:04:48.713432 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:04:48.715758 systemd-logind[1211]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:04:48.722939 systemd-logind[1211]: Removed session 10. Aug 13 00:04:48.758889 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 54942 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:48.760588 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:48.764227 systemd-logind[1211]: New session 11 of user core. Aug 13 00:04:48.765329 systemd[1]: Started session-11.scope. Aug 13 00:04:48.887056 sshd[3359]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:48.893009 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:54942.service: Deactivated successfully. Aug 13 00:04:48.893840 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:04:48.894528 systemd-logind[1211]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:04:48.895176 systemd-logind[1211]: Removed session 11. Aug 13 00:04:53.894776 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:52136.service. Aug 13 00:04:53.934030 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 52136 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:53.935674 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:53.939388 systemd-logind[1211]: New session 12 of user core. Aug 13 00:04:53.940529 systemd[1]: Started session-12.scope. Aug 13 00:04:54.092803 sshd[3373]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:54.096894 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:52136.service: Deactivated successfully. Aug 13 00:04:54.097835 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:04:54.099143 systemd-logind[1211]: Session 12 logged out. 
Waiting for processes to exit. Aug 13 00:04:54.100204 systemd-logind[1211]: Removed session 12. Aug 13 00:04:59.098084 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:52150.service. Aug 13 00:04:59.135700 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 52150 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:59.137416 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:59.141558 systemd-logind[1211]: New session 13 of user core. Aug 13 00:04:59.142556 systemd[1]: Started session-13.scope. Aug 13 00:04:59.259187 sshd[3387]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:59.263930 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:52152.service. Aug 13 00:04:59.264778 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:52150.service: Deactivated successfully. Aug 13 00:04:59.265592 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:04:59.267939 systemd-logind[1211]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:04:59.269243 systemd-logind[1211]: Removed session 13. Aug 13 00:04:59.300736 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 52152 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:59.302163 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:59.306083 systemd-logind[1211]: New session 14 of user core. Aug 13 00:04:59.306998 systemd[1]: Started session-14.scope. Aug 13 00:04:59.529258 sshd[3399]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:59.532532 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:52168.service. Aug 13 00:04:59.533679 systemd-logind[1211]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:04:59.533912 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:52152.service: Deactivated successfully. Aug 13 00:04:59.534849 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 13 00:04:59.535651 systemd-logind[1211]: Removed session 14. Aug 13 00:04:59.571654 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 52168 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:04:59.573453 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:59.577000 systemd-logind[1211]: New session 15 of user core. Aug 13 00:04:59.577900 systemd[1]: Started session-15.scope. Aug 13 00:05:00.212885 sshd[3411]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:00.216957 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:52174.service. Aug 13 00:05:00.217445 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:52168.service: Deactivated successfully. Aug 13 00:05:00.218291 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:05:00.223334 systemd-logind[1211]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:05:00.224786 systemd-logind[1211]: Removed session 15. Aug 13 00:05:00.264745 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 52174 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:00.266135 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:00.269704 systemd-logind[1211]: New session 16 of user core. Aug 13 00:05:00.270535 systemd[1]: Started session-16.scope. Aug 13 00:05:00.491370 sshd[3430]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:00.496316 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:52186.service. Aug 13 00:05:00.497042 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:52174.service: Deactivated successfully. Aug 13 00:05:00.498106 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:05:00.498723 systemd-logind[1211]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:05:00.499699 systemd-logind[1211]: Removed session 16. 
Aug 13 00:05:00.535660 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 52186 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:00.537139 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:00.543443 systemd[1]: Started session-17.scope. Aug 13 00:05:00.544080 systemd-logind[1211]: New session 17 of user core. Aug 13 00:05:00.662487 sshd[3445]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:00.665312 systemd-logind[1211]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:05:00.665468 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:05:00.666397 systemd-logind[1211]: Removed session 17. Aug 13 00:05:00.666764 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:52186.service: Deactivated successfully. Aug 13 00:05:05.668142 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:46558.service. Aug 13 00:05:05.709279 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 46558 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:05.710991 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:05.715812 systemd[1]: Started session-18.scope. Aug 13 00:05:05.717118 systemd-logind[1211]: New session 18 of user core. Aug 13 00:05:05.835640 sshd[3461]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:05.838651 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:46558.service: Deactivated successfully. Aug 13 00:05:05.839962 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:05:05.840630 systemd-logind[1211]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:05:05.842017 systemd-logind[1211]: Removed session 18. Aug 13 00:05:10.841365 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:46566.service. 
Aug 13 00:05:10.876576 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:10.878328 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:10.882604 systemd-logind[1211]: New session 19 of user core. Aug 13 00:05:10.883248 systemd[1]: Started session-19.scope. Aug 13 00:05:10.992700 sshd[3477]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:10.995382 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:46566.service: Deactivated successfully. Aug 13 00:05:10.996163 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:05:10.996894 systemd-logind[1211]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:05:10.997629 systemd-logind[1211]: Removed session 19. Aug 13 00:05:15.997706 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:36714.service. Aug 13 00:05:16.036817 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 36714 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:16.038621 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:16.045957 systemd[1]: Started session-20.scope. Aug 13 00:05:16.046274 systemd-logind[1211]: New session 20 of user core. Aug 13 00:05:16.179608 sshd[3493]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:16.183237 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:36714.service: Deactivated successfully. Aug 13 00:05:16.184078 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:05:16.184706 systemd-logind[1211]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:05:16.185582 systemd-logind[1211]: Removed session 20. 
Aug 13 00:05:18.272926 kubelet[1911]: E0813 00:05:18.272887 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:21.184031 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:36722.service. Aug 13 00:05:21.223869 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 36722 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:21.226969 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:21.232339 systemd-logind[1211]: New session 21 of user core. Aug 13 00:05:21.232828 systemd[1]: Started session-21.scope. Aug 13 00:05:21.272028 kubelet[1911]: E0813 00:05:21.271985 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:21.361238 sshd[3506]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:21.365151 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:36730.service. Aug 13 00:05:21.369239 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:05:21.369930 systemd-logind[1211]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:05:21.370096 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:36722.service: Deactivated successfully. Aug 13 00:05:21.378267 systemd-logind[1211]: Removed session 21. Aug 13 00:05:21.443140 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:21.444447 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:21.448419 systemd-logind[1211]: New session 22 of user core. Aug 13 00:05:21.449223 systemd[1]: Started session-22.scope. 
Aug 13 00:05:22.819249 env[1223]: time="2025-08-13T00:05:22.817543463Z" level=info msg="StopContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" with timeout 30 (s)" Aug 13 00:05:22.819249 env[1223]: time="2025-08-13T00:05:22.818400132Z" level=info msg="Stop container \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" with signal terminated" Aug 13 00:05:22.833068 systemd[1]: cri-containerd-c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6.scope: Deactivated successfully. Aug 13 00:05:22.835414 systemd[1]: run-containerd-runc-k8s.io-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3-runc.bDm97X.mount: Deactivated successfully. Aug 13 00:05:22.858921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6-rootfs.mount: Deactivated successfully. Aug 13 00:05:22.868627 env[1223]: time="2025-08-13T00:05:22.868579081Z" level=info msg="shim disconnected" id=c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6 Aug 13 00:05:22.868935 env[1223]: time="2025-08-13T00:05:22.868914806Z" level=warning msg="cleaning up after shim disconnected" id=c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6 namespace=k8s.io Aug 13 00:05:22.869010 env[1223]: time="2025-08-13T00:05:22.868995157Z" level=info msg="cleaning up dead shim" Aug 13 00:05:22.871756 env[1223]: time="2025-08-13T00:05:22.871702150Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:05:22.877873 env[1223]: time="2025-08-13T00:05:22.877833858Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3565 runtime=io.containerd.runc.v2\n" Aug 13 00:05:22.878246 
env[1223]: time="2025-08-13T00:05:22.878206939Z" level=info msg="StopContainer for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" with timeout 2 (s)" Aug 13 00:05:22.878570 env[1223]: time="2025-08-13T00:05:22.878527505Z" level=info msg="Stop container \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" with signal terminated" Aug 13 00:05:22.880916 env[1223]: time="2025-08-13T00:05:22.880877975Z" level=info msg="StopContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" returns successfully" Aug 13 00:05:22.881720 env[1223]: time="2025-08-13T00:05:22.881685569Z" level=info msg="StopPodSandbox for \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\"" Aug 13 00:05:22.881797 env[1223]: time="2025-08-13T00:05:22.881753522Z" level=info msg="Container to stop \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.883504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085-shm.mount: Deactivated successfully. Aug 13 00:05:22.886351 systemd-networkd[1047]: lxc_health: Link DOWN Aug 13 00:05:22.886359 systemd-networkd[1047]: lxc_health: Lost carrier Aug 13 00:05:22.891516 systemd[1]: cri-containerd-3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085.scope: Deactivated successfully. Aug 13 00:05:22.913282 systemd[1]: cri-containerd-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3.scope: Deactivated successfully. Aug 13 00:05:22.913640 systemd[1]: cri-containerd-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3.scope: Consumed 7.613s CPU time. 
Aug 13 00:05:22.932125 env[1223]: time="2025-08-13T00:05:22.932079735Z" level=info msg="shim disconnected" id=3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085 Aug 13 00:05:22.932693 env[1223]: time="2025-08-13T00:05:22.932665313Z" level=warning msg="cleaning up after shim disconnected" id=3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085 namespace=k8s.io Aug 13 00:05:22.932790 env[1223]: time="2025-08-13T00:05:22.932775341Z" level=info msg="cleaning up dead shim" Aug 13 00:05:22.933076 env[1223]: time="2025-08-13T00:05:22.933041633Z" level=info msg="shim disconnected" id=647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3 Aug 13 00:05:22.933174 env[1223]: time="2025-08-13T00:05:22.933157621Z" level=warning msg="cleaning up after shim disconnected" id=647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3 namespace=k8s.io Aug 13 00:05:22.933241 env[1223]: time="2025-08-13T00:05:22.933225974Z" level=info msg="cleaning up dead shim" Aug 13 00:05:22.940696 env[1223]: time="2025-08-13T00:05:22.940650265Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3620 runtime=io.containerd.runc.v2\n" Aug 13 00:05:22.941197 env[1223]: time="2025-08-13T00:05:22.941167450Z" level=info msg="TearDown network for sandbox \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\" successfully" Aug 13 00:05:22.941298 env[1223]: time="2025-08-13T00:05:22.941278958Z" level=info msg="StopPodSandbox for \"3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085\" returns successfully" Aug 13 00:05:22.942524 env[1223]: time="2025-08-13T00:05:22.942498149Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3621 runtime=io.containerd.runc.v2\n" Aug 13 00:05:22.944774 env[1223]: time="2025-08-13T00:05:22.944740550Z" level=info msg="StopContainer for 
\"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" returns successfully" Aug 13 00:05:22.945770 env[1223]: time="2025-08-13T00:05:22.945672011Z" level=info msg="StopPodSandbox for \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\"" Aug 13 00:05:22.945770 env[1223]: time="2025-08-13T00:05:22.945738004Z" level=info msg="Container to stop \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.945770 env[1223]: time="2025-08-13T00:05:22.945754363Z" level=info msg="Container to stop \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.945937 env[1223]: time="2025-08-13T00:05:22.945766081Z" level=info msg="Container to stop \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.945937 env[1223]: time="2025-08-13T00:05:22.945790199Z" level=info msg="Container to stop \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.945937 env[1223]: time="2025-08-13T00:05:22.945806357Z" level=info msg="Container to stop \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:22.953075 systemd[1]: cri-containerd-7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a.scope: Deactivated successfully. 
Aug 13 00:05:22.979201 env[1223]: time="2025-08-13T00:05:22.979155174Z" level=info msg="shim disconnected" id=7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a Aug 13 00:05:22.979580 env[1223]: time="2025-08-13T00:05:22.979549532Z" level=warning msg="cleaning up after shim disconnected" id=7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a namespace=k8s.io Aug 13 00:05:22.979665 env[1223]: time="2025-08-13T00:05:22.979649082Z" level=info msg="cleaning up dead shim" Aug 13 00:05:22.987177 env[1223]: time="2025-08-13T00:05:22.987134847Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n" Aug 13 00:05:22.987736 env[1223]: time="2025-08-13T00:05:22.987659951Z" level=info msg="TearDown network for sandbox \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" successfully" Aug 13 00:05:22.987869 env[1223]: time="2025-08-13T00:05:22.987848851Z" level=info msg="StopPodSandbox for \"7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a\" returns successfully" Aug 13 00:05:22.990117 kubelet[1911]: I0813 00:05:22.990082 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8820c744-69cb-480a-92f1-d564216e814f-cilium-config-path\") pod \"8820c744-69cb-480a-92f1-d564216e814f\" (UID: \"8820c744-69cb-480a-92f1-d564216e814f\") " Aug 13 00:05:22.990406 kubelet[1911]: I0813 00:05:22.990131 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psnjf\" (UniqueName: \"kubernetes.io/projected/8820c744-69cb-480a-92f1-d564216e814f-kube-api-access-psnjf\") pod \"8820c744-69cb-480a-92f1-d564216e814f\" (UID: \"8820c744-69cb-480a-92f1-d564216e814f\") " Aug 13 00:05:22.996401 kubelet[1911]: I0813 00:05:22.996356 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8820c744-69cb-480a-92f1-d564216e814f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8820c744-69cb-480a-92f1-d564216e814f" (UID: "8820c744-69cb-480a-92f1-d564216e814f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:05:23.002801 kubelet[1911]: I0813 00:05:23.002752 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8820c744-69cb-480a-92f1-d564216e814f-kube-api-access-psnjf" (OuterVolumeSpecName: "kube-api-access-psnjf") pod "8820c744-69cb-480a-92f1-d564216e814f" (UID: "8820c744-69cb-480a-92f1-d564216e814f"). InnerVolumeSpecName "kube-api-access-psnjf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090850 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cni-path\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090898 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090914 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-lib-modules\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090928 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hostproc\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090944 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-net\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093537 kubelet[1911]: I0813 00:05:23.090958 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-run\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.090975 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-config-path\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.090989 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-etc-cni-netd\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.091004 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-xtables-lock\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.091018 1911 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-kernel\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.091039 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7lfl\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-kube-api-access-p7lfl\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093825 kubelet[1911]: I0813 00:05:23.091055 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-bpf-maps\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093968 kubelet[1911]: I0813 00:05:23.091068 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-cgroup\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093968 kubelet[1911]: I0813 00:05:23.091086 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-clustermesh-secrets\") pod \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\" (UID: \"b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d\") " Aug 13 00:05:23.093968 kubelet[1911]: I0813 00:05:23.091122 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8820c744-69cb-480a-92f1-d564216e814f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 
00:05:23.093968 kubelet[1911]: I0813 00:05:23.091133 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psnjf\" (UniqueName: \"kubernetes.io/projected/8820c744-69cb-480a-92f1-d564216e814f-kube-api-access-psnjf\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.093968 kubelet[1911]: I0813 00:05:23.091502 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.093968 kubelet[1911]: I0813 00:05:23.091522 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094106 kubelet[1911]: I0813 00:05:23.091548 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094106 kubelet[1911]: I0813 00:05:23.091587 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094106 kubelet[1911]: I0813 00:05:23.091604 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094106 kubelet[1911]: I0813 00:05:23.091616 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094106 kubelet[1911]: I0813 00:05:23.092040 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094229 kubelet[1911]: I0813 00:05:23.092072 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094229 kubelet[1911]: I0813 00:05:23.092089 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094229 kubelet[1911]: I0813 00:05:23.092104 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:23.094229 kubelet[1911]: I0813 00:05:23.093381 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:05:23.096732 kubelet[1911]: I0813 00:05:23.096642 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:05:23.097804 kubelet[1911]: I0813 00:05:23.097760 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:05:23.099288 kubelet[1911]: I0813 00:05:23.099232 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-kube-api-access-p7lfl" (OuterVolumeSpecName: "kube-api-access-p7lfl") pod "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" (UID: "b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d"). InnerVolumeSpecName "kube-api-access-p7lfl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:05:23.191713 kubelet[1911]: I0813 00:05:23.191661 1911 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191713 kubelet[1911]: I0813 00:05:23.191698 1911 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191713 kubelet[1911]: I0813 00:05:23.191709 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191713 kubelet[1911]: I0813 00:05:23.191717 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191727 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191743 1911 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191751 1911 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191759 1911 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191766 1911 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191774 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191782 1911 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" 
Aug 13 00:05:23.191939 kubelet[1911]: I0813 00:05:23.191790 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7lfl\" (UniqueName: \"kubernetes.io/projected/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-kube-api-access-p7lfl\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.192120 kubelet[1911]: I0813 00:05:23.191798 1911 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.192120 kubelet[1911]: I0813 00:05:23.191811 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:23.280810 systemd[1]: Removed slice kubepods-besteffort-pod8820c744_69cb_480a_92f1_d564216e814f.slice. Aug 13 00:05:23.282836 systemd[1]: Removed slice kubepods-burstable-podb2c1c4f3_456c_45e3_b0db_ebd05ad6d13d.slice. Aug 13 00:05:23.282936 systemd[1]: kubepods-burstable-podb2c1c4f3_456c_45e3_b0db_ebd05ad6d13d.slice: Consumed 8.012s CPU time. 
Aug 13 00:05:23.497863 kubelet[1911]: I0813 00:05:23.497804 1911 scope.go:117] "RemoveContainer" containerID="647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3" Aug 13 00:05:23.499929 env[1223]: time="2025-08-13T00:05:23.499859127Z" level=info msg="RemoveContainer for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\"" Aug 13 00:05:23.508552 env[1223]: time="2025-08-13T00:05:23.508503303Z" level=info msg="RemoveContainer for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" returns successfully" Aug 13 00:05:23.509023 kubelet[1911]: I0813 00:05:23.508992 1911 scope.go:117] "RemoveContainer" containerID="eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8" Aug 13 00:05:23.510299 env[1223]: time="2025-08-13T00:05:23.510259407Z" level=info msg="RemoveContainer for \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\"" Aug 13 00:05:23.514768 env[1223]: time="2025-08-13T00:05:23.514728121Z" level=info msg="RemoveContainer for \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\" returns successfully" Aug 13 00:05:23.515050 kubelet[1911]: I0813 00:05:23.515019 1911 scope.go:117] "RemoveContainer" containerID="fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea" Aug 13 00:05:23.518008 env[1223]: time="2025-08-13T00:05:23.517961717Z" level=info msg="RemoveContainer for \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\"" Aug 13 00:05:23.522102 env[1223]: time="2025-08-13T00:05:23.522054268Z" level=info msg="RemoveContainer for \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\" returns successfully" Aug 13 00:05:23.522779 kubelet[1911]: I0813 00:05:23.522755 1911 scope.go:117] "RemoveContainer" containerID="b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261" Aug 13 00:05:23.524546 env[1223]: time="2025-08-13T00:05:23.524481225Z" level=info msg="RemoveContainer for 
\"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\"" Aug 13 00:05:23.531316 env[1223]: time="2025-08-13T00:05:23.531236070Z" level=info msg="RemoveContainer for \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\" returns successfully" Aug 13 00:05:23.532791 kubelet[1911]: I0813 00:05:23.532346 1911 scope.go:117] "RemoveContainer" containerID="f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2" Aug 13 00:05:23.534804 env[1223]: time="2025-08-13T00:05:23.534760558Z" level=info msg="RemoveContainer for \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\"" Aug 13 00:05:23.543131 env[1223]: time="2025-08-13T00:05:23.543070887Z" level=info msg="RemoveContainer for \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\" returns successfully" Aug 13 00:05:23.543536 kubelet[1911]: I0813 00:05:23.543505 1911 scope.go:117] "RemoveContainer" containerID="647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3" Aug 13 00:05:23.545317 env[1223]: time="2025-08-13T00:05:23.545225151Z" level=error msg="ContainerStatus for \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\": not found" Aug 13 00:05:23.545626 kubelet[1911]: E0813 00:05:23.545598 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\": not found" containerID="647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3" Aug 13 00:05:23.547336 kubelet[1911]: I0813 00:05:23.547191 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3"} err="failed to get container status 
\"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3\": not found" Aug 13 00:05:23.547510 kubelet[1911]: I0813 00:05:23.547492 1911 scope.go:117] "RemoveContainer" containerID="eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8" Aug 13 00:05:23.548189 env[1223]: time="2025-08-13T00:05:23.548119782Z" level=error msg="ContainerStatus for \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\": not found" Aug 13 00:05:23.548327 kubelet[1911]: E0813 00:05:23.548301 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\": not found" containerID="eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8" Aug 13 00:05:23.548409 kubelet[1911]: I0813 00:05:23.548328 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8"} err="failed to get container status \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\": rpc error: code = NotFound desc = an error occurred when try to find container \"eecf506d82048a358b577e2aaaa2708941c7a7ddff578820b0acfe2c3f67add8\": not found" Aug 13 00:05:23.548409 kubelet[1911]: I0813 00:05:23.548348 1911 scope.go:117] "RemoveContainer" containerID="fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea" Aug 13 00:05:23.548579 env[1223]: time="2025-08-13T00:05:23.548512742Z" level=error msg="ContainerStatus for \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\": not found" Aug 13 00:05:23.548712 kubelet[1911]: E0813 00:05:23.548690 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\": not found" containerID="fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea" Aug 13 00:05:23.548766 kubelet[1911]: I0813 00:05:23.548717 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea"} err="failed to get container status \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe7fa66f874757c2b8224eaf57bee3c450f5026a105566c23306aae43f6319ea\": not found" Aug 13 00:05:23.548766 kubelet[1911]: I0813 00:05:23.548734 1911 scope.go:117] "RemoveContainer" containerID="b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261" Aug 13 00:05:23.549101 env[1223]: time="2025-08-13T00:05:23.548994334Z" level=error msg="ContainerStatus for \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\": not found" Aug 13 00:05:23.549166 kubelet[1911]: E0813 00:05:23.549131 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\": not found" containerID="b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261" Aug 13 00:05:23.549166 kubelet[1911]: I0813 00:05:23.549147 1911 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261"} err="failed to get container status \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7e9cdb0e633a3d07da4de08fbdfa2dccfef3495045ef87310658a01443a7261\": not found" Aug 13 00:05:23.549166 kubelet[1911]: I0813 00:05:23.549159 1911 scope.go:117] "RemoveContainer" containerID="f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2" Aug 13 00:05:23.549365 env[1223]: time="2025-08-13T00:05:23.549313142Z" level=error msg="ContainerStatus for \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\": not found" Aug 13 00:05:23.549489 kubelet[1911]: E0813 00:05:23.549465 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\": not found" containerID="f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2" Aug 13 00:05:23.549537 kubelet[1911]: I0813 00:05:23.549491 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2"} err="failed to get container status \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f92ca12616580660d44fc82297a7151ea79b4f63eb9274062c315a62fd676ac2\": not found" Aug 13 00:05:23.549537 kubelet[1911]: I0813 00:05:23.549510 1911 scope.go:117] "RemoveContainer" 
containerID="c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6" Aug 13 00:05:23.551219 env[1223]: time="2025-08-13T00:05:23.550859908Z" level=info msg="RemoveContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\"" Aug 13 00:05:23.557229 env[1223]: time="2025-08-13T00:05:23.557180076Z" level=info msg="RemoveContainer for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" returns successfully" Aug 13 00:05:23.557661 kubelet[1911]: I0813 00:05:23.557633 1911 scope.go:117] "RemoveContainer" containerID="c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6" Aug 13 00:05:23.557984 env[1223]: time="2025-08-13T00:05:23.557919402Z" level=error msg="ContainerStatus for \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\": not found" Aug 13 00:05:23.558165 kubelet[1911]: E0813 00:05:23.558082 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\": not found" containerID="c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6" Aug 13 00:05:23.558165 kubelet[1911]: I0813 00:05:23.558118 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6"} err="failed to get container status \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c083a74870d65c2ea90a0cb4d021b77425995549da99f3f56c823e9b29bbf9d6\": not found" Aug 13 00:05:23.827263 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-647743adc419cdd4dcf9638ec320b109aade1a77fcb44f664ca3a317ac9b8fa3-rootfs.mount: Deactivated successfully. Aug 13 00:05:23.827374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a-rootfs.mount: Deactivated successfully. Aug 13 00:05:23.827441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f1b97a5fd41b65223fb67b4df684e3ca5f76d7a13a4a99b1c99bcb9d58bee1a-shm.mount: Deactivated successfully. Aug 13 00:05:23.827499 systemd[1]: var-lib-kubelet-pods-b2c1c4f3\x2d456c\x2d45e3\x2db0db\x2debd05ad6d13d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:05:23.827556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cff24869746f7eb12f7397e68dd7011b3f9349da43126c7c4a525382032c085-rootfs.mount: Deactivated successfully. Aug 13 00:05:23.827629 systemd[1]: var-lib-kubelet-pods-8820c744\x2d69cb\x2d480a\x2d92f1\x2dd564216e814f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsnjf.mount: Deactivated successfully. Aug 13 00:05:23.827682 systemd[1]: var-lib-kubelet-pods-b2c1c4f3\x2d456c\x2d45e3\x2db0db\x2debd05ad6d13d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7lfl.mount: Deactivated successfully. Aug 13 00:05:23.827734 systemd[1]: var-lib-kubelet-pods-b2c1c4f3\x2d456c\x2d45e3\x2db0db\x2debd05ad6d13d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:05:24.750200 sshd[3519]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:24.753557 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:36730.service: Deactivated successfully. Aug 13 00:05:24.754207 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:05:24.755099 systemd-logind[1211]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:05:24.759855 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:34646.service. 
Aug 13 00:05:24.760981 systemd-logind[1211]: Removed session 22. Aug 13 00:05:24.798321 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 34646 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:24.799786 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:24.805944 systemd-logind[1211]: New session 23 of user core. Aug 13 00:05:24.806834 systemd[1]: Started session-23.scope. Aug 13 00:05:25.274484 kubelet[1911]: I0813 00:05:25.274449 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8820c744-69cb-480a-92f1-d564216e814f" path="/var/lib/kubelet/pods/8820c744-69cb-480a-92f1-d564216e814f/volumes" Aug 13 00:05:25.275271 kubelet[1911]: I0813 00:05:25.275247 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" path="/var/lib/kubelet/pods/b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d/volumes" Aug 13 00:05:25.306833 kubelet[1911]: E0813 00:05:25.306791 1911 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:05:25.501126 sshd[3683]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:25.504767 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:34646.service: Deactivated successfully. Aug 13 00:05:25.505452 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:05:25.508112 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:34652.service. Aug 13 00:05:25.508934 systemd-logind[1211]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:05:25.510494 systemd-logind[1211]: Removed session 23. 
Aug 13 00:05:25.526612 kubelet[1911]: I0813 00:05:25.526490 1911 memory_manager.go:355] "RemoveStaleState removing state" podUID="8820c744-69cb-480a-92f1-d564216e814f" containerName="cilium-operator" Aug 13 00:05:25.526612 kubelet[1911]: I0813 00:05:25.526524 1911 memory_manager.go:355] "RemoveStaleState removing state" podUID="b2c1c4f3-456c-45e3-b0db-ebd05ad6d13d" containerName="cilium-agent" Aug 13 00:05:25.534117 kubelet[1911]: W0813 00:05:25.533731 1911 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Aug 13 00:05:25.534117 kubelet[1911]: E0813 00:05:25.533779 1911 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Aug 13 00:05:25.534117 kubelet[1911]: I0813 00:05:25.533830 1911 status_manager.go:890] "Failed to get status for pod" podUID="6a43dcce-9485-4f64-b054-bf4a25759a26" pod="kube-system/cilium-8vklq" err="pods \"cilium-8vklq\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Aug 13 00:05:25.533966 systemd[1]: Created slice kubepods-burstable-pod6a43dcce_9485_4f64_b054_bf4a25759a26.slice. 
Aug 13 00:05:25.552141 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 34652 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:25.554182 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:25.558795 systemd-logind[1211]: New session 24 of user core. Aug 13 00:05:25.559664 systemd[1]: Started session-24.scope. Aug 13 00:05:25.610512 kubelet[1911]: I0813 00:05:25.610472 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-clustermesh-secrets\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610512 kubelet[1911]: I0813 00:05:25.610512 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-net\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610534 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-cgroup\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610554 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-etc-cni-netd\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610588 1911 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-ipsec-secrets\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610607 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-bpf-maps\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610633 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-hostproc\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610692 kubelet[1911]: I0813 00:05:25.610661 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxfrk\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-kube-api-access-bxfrk\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610680 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-xtables-lock\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610698 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cni-path\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610742 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-kernel\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610773 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-hubble-tls\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610800 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-run\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610833 kubelet[1911]: I0813 00:05:25.610815 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-lib-modules\") pod \"cilium-8vklq\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.610960 kubelet[1911]: I0813 00:05:25.610835 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-config-path\") pod \"cilium-8vklq\" (UID: 
\"6a43dcce-9485-4f64-b054-bf4a25759a26\") " pod="kube-system/cilium-8vklq" Aug 13 00:05:25.693068 sshd[3695]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:25.697849 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:34654.service. Aug 13 00:05:25.698468 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:34652.service: Deactivated successfully. Aug 13 00:05:25.699249 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:05:25.700278 systemd-logind[1211]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:05:25.701926 systemd-logind[1211]: Removed session 24. Aug 13 00:05:25.707129 kubelet[1911]: E0813 00:05:25.707025 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-bxfrk lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-8vklq" podUID="6a43dcce-9485-4f64-b054-bf4a25759a26" Aug 13 00:05:25.744580 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 34654 ssh2: RSA SHA256:4jnu15cGk13pD8KfkQFNgDiqBqiu/IV3MiTKKCAhqJg Aug 13 00:05:25.746000 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:25.750845 systemd-logind[1211]: New session 25 of user core. Aug 13 00:05:25.750869 systemd[1]: Started session-25.scope. 
Aug 13 00:05:26.617493 kubelet[1911]: I0813 00:05:26.617427 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-xtables-lock\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.617493 kubelet[1911]: I0813 00:05:26.617477 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cni-path\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.617493 kubelet[1911]: I0813 00:05:26.617502 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-etc-cni-netd\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.617874 kubelet[1911]: I0813 00:05:26.617518 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-hostproc\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.617874 kubelet[1911]: I0813 00:05:26.617581 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-config-path\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.617874 kubelet[1911]: I0813 00:05:26.617580 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod 
"6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.617874 kubelet[1911]: I0813 00:05:26.617632 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.617874 kubelet[1911]: I0813 00:05:26.617622 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.618039 kubelet[1911]: I0813 00:05:26.617621 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.618039 kubelet[1911]: I0813 00:05:26.617675 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-kernel\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618039 kubelet[1911]: I0813 00:05:26.617693 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-cgroup\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618039 kubelet[1911]: I0813 00:05:26.617721 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.618039 kubelet[1911]: I0813 00:05:26.617726 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.618168 kubelet[1911]: I0813 00:05:26.617821 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.618558 kubelet[1911]: I0813 00:05:26.618538 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-bpf-maps\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618692 kubelet[1911]: I0813 00:05:26.618677 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxfrk\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-kube-api-access-bxfrk\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618766 kubelet[1911]: I0813 00:05:26.618753 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-hubble-tls\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618840 kubelet[1911]: I0813 00:05:26.618828 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-run\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618905 kubelet[1911]: I0813 00:05:26.618894 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-clustermesh-secrets\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.618984 kubelet[1911]: I0813 00:05:26.618971 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-lib-modules\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.619051 kubelet[1911]: I0813 00:05:26.619039 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-net\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.619117 kubelet[1911]: I0813 00:05:26.619105 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-ipsec-secrets\") pod \"6a43dcce-9485-4f64-b054-bf4a25759a26\" (UID: \"6a43dcce-9485-4f64-b054-bf4a25759a26\") " Aug 13 00:05:26.619222 kubelet[1911]: I0813 00:05:26.619209 1911 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619280 kubelet[1911]: I0813 00:05:26.619270 1911 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619343 kubelet[1911]: I0813 00:05:26.619331 1911 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619402 kubelet[1911]: I0813 00:05:26.619392 1911 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619500 kubelet[1911]: I0813 
00:05:26.619433 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:05:26.619500 kubelet[1911]: I0813 00:05:26.619450 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619575 kubelet[1911]: I0813 00:05:26.619519 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619575 kubelet[1911]: I0813 00:05:26.619531 1911 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.619649 kubelet[1911]: I0813 00:05:26.619611 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.619649 kubelet[1911]: I0813 00:05:26.619630 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.619715 kubelet[1911]: I0813 00:05:26.619611 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:05:26.624238 kubelet[1911]: I0813 00:05:26.624194 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:05:26.624459 kubelet[1911]: I0813 00:05:26.624432 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:05:26.624597 systemd[1]: var-lib-kubelet-pods-6a43dcce\x2d9485\x2d4f64\x2db054\x2dbf4a25759a26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:05:26.624786 systemd[1]: var-lib-kubelet-pods-6a43dcce\x2d9485\x2d4f64\x2db054\x2dbf4a25759a26-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:05:26.625647 kubelet[1911]: I0813 00:05:26.625615 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:05:26.625773 kubelet[1911]: I0813 00:05:26.625648 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-kube-api-access-bxfrk" (OuterVolumeSpecName: "kube-api-access-bxfrk") pod "6a43dcce-9485-4f64-b054-bf4a25759a26" (UID: "6a43dcce-9485-4f64-b054-bf4a25759a26"). InnerVolumeSpecName "kube-api-access-bxfrk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:05:26.717239 systemd[1]: var-lib-kubelet-pods-6a43dcce\x2d9485\x2d4f64\x2db054\x2dbf4a25759a26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxfrk.mount: Deactivated successfully. Aug 13 00:05:26.717337 systemd[1]: var-lib-kubelet-pods-6a43dcce\x2d9485\x2d4f64\x2db054\x2dbf4a25759a26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:05:26.720168 kubelet[1911]: I0813 00:05:26.720124 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720168 kubelet[1911]: I0813 00:05:26.720162 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxfrk\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-kube-api-access-bxfrk\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720168 kubelet[1911]: I0813 00:05:26.720178 1911 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a43dcce-9485-4f64-b054-bf4a25759a26-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720351 kubelet[1911]: I0813 00:05:26.720186 1911 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720351 kubelet[1911]: I0813 00:05:26.720194 1911 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720351 kubelet[1911]: I0813 00:05:26.720203 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a43dcce-9485-4f64-b054-bf4a25759a26-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720351 kubelet[1911]: I0813 00:05:26.720211 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.720351 kubelet[1911]: I0813 00:05:26.720218 1911 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a43dcce-9485-4f64-b054-bf4a25759a26-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 00:05:26.884473 kubelet[1911]: I0813 00:05:26.884365 1911 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:05:26Z","lastTransitionTime":"2025-08-13T00:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:05:27.277451 systemd[1]: Removed slice kubepods-burstable-pod6a43dcce_9485_4f64_b054_bf4a25759a26.slice. Aug 13 00:05:27.556663 systemd[1]: Created slice kubepods-burstable-podfc53a1c6_b524_4a7b_9722_f5f38148a1a5.slice. Aug 13 00:05:27.625846 kubelet[1911]: I0813 00:05:27.625767 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-host-proc-sys-kernel\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.625846 kubelet[1911]: I0813 00:05:27.625821 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-bpf-maps\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.625846 kubelet[1911]: I0813 00:05:27.625840 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-etc-cni-netd\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " 
pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625865 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kwfb\" (UniqueName: \"kubernetes.io/projected/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-kube-api-access-5kwfb\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625916 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-cilium-run\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625936 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-hubble-tls\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625952 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-hostproc\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625976 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-cilium-ipsec-secrets\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626369 kubelet[1911]: I0813 00:05:27.625994 1911 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-lib-modules\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626010 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-cilium-cgroup\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626029 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-xtables-lock\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626053 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-clustermesh-secrets\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626075 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-cni-path\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626091 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-host-proc-sys-net\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.626504 kubelet[1911]: I0813 00:05:27.626130 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc53a1c6-b524-4a7b-9722-f5f38148a1a5-cilium-config-path\") pod \"cilium-qdrqh\" (UID: \"fc53a1c6-b524-4a7b-9722-f5f38148a1a5\") " pod="kube-system/cilium-qdrqh" Aug 13 00:05:27.859588 kubelet[1911]: E0813 00:05:27.859440 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:27.861083 env[1223]: time="2025-08-13T00:05:27.860037681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdrqh,Uid:fc53a1c6-b524-4a7b-9722-f5f38148a1a5,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:27.877062 env[1223]: time="2025-08-13T00:05:27.876844548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:27.877062 env[1223]: time="2025-08-13T00:05:27.877016015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:27.877675 env[1223]: time="2025-08-13T00:05:27.877036974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:27.878457 env[1223]: time="2025-08-13T00:05:27.878315675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7 pid=3739 runtime=io.containerd.runc.v2 Aug 13 00:05:27.908818 systemd[1]: Started cri-containerd-db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7.scope. Aug 13 00:05:27.973507 env[1223]: time="2025-08-13T00:05:27.973437439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qdrqh,Uid:fc53a1c6-b524-4a7b-9722-f5f38148a1a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\"" Aug 13 00:05:27.974222 kubelet[1911]: E0813 00:05:27.974193 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:27.976389 env[1223]: time="2025-08-13T00:05:27.976299179Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:05:27.988588 env[1223]: time="2025-08-13T00:05:27.988444765Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4\"" Aug 13 00:05:27.989036 env[1223]: time="2025-08-13T00:05:27.989009161Z" level=info msg="StartContainer for \"d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4\"" Aug 13 00:05:28.005513 systemd[1]: Started cri-containerd-d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4.scope. 
Aug 13 00:05:28.059676 env[1223]: time="2025-08-13T00:05:28.059605846Z" level=info msg="StartContainer for \"d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4\" returns successfully" Aug 13 00:05:28.077301 systemd[1]: cri-containerd-d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4.scope: Deactivated successfully. Aug 13 00:05:28.114885 env[1223]: time="2025-08-13T00:05:28.114765377Z" level=info msg="shim disconnected" id=d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4 Aug 13 00:05:28.115199 env[1223]: time="2025-08-13T00:05:28.115178267Z" level=warning msg="cleaning up after shim disconnected" id=d30299d205cda0ee759464089d0e16254a577babd7437f2d201449838776cbe4 namespace=k8s.io Aug 13 00:05:28.116760 env[1223]: time="2025-08-13T00:05:28.116718037Z" level=info msg="cleaning up dead shim" Aug 13 00:05:28.126613 env[1223]: time="2025-08-13T00:05:28.126551493Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n" Aug 13 00:05:28.514907 kubelet[1911]: E0813 00:05:28.514871 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:28.519947 env[1223]: time="2025-08-13T00:05:28.519890934Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:05:28.532917 env[1223]: time="2025-08-13T00:05:28.532867245Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab\"" Aug 13 00:05:28.534774 env[1223]: time="2025-08-13T00:05:28.534742870Z" 
level=info msg="StartContainer for \"2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab\"" Aug 13 00:05:28.550521 systemd[1]: Started cri-containerd-2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab.scope. Aug 13 00:05:28.590210 systemd[1]: cri-containerd-2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab.scope: Deactivated successfully. Aug 13 00:05:28.601756 env[1223]: time="2025-08-13T00:05:28.601631202Z" level=info msg="StartContainer for \"2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab\" returns successfully" Aug 13 00:05:28.741480 env[1223]: time="2025-08-13T00:05:28.741423154Z" level=info msg="shim disconnected" id=2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab Aug 13 00:05:28.741480 env[1223]: time="2025-08-13T00:05:28.741475230Z" level=warning msg="cleaning up after shim disconnected" id=2fed0c52ca1b492927bbb0d4694923f885fdc2beb84a191b9b6c2bf00a8a1dab namespace=k8s.io Aug 13 00:05:28.741480 env[1223]: time="2025-08-13T00:05:28.741486109Z" level=info msg="cleaning up dead shim" Aug 13 00:05:28.751389 env[1223]: time="2025-08-13T00:05:28.751326685Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3883 runtime=io.containerd.runc.v2\n" Aug 13 00:05:29.275170 kubelet[1911]: I0813 00:05:29.275123 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a43dcce-9485-4f64-b054-bf4a25759a26" path="/var/lib/kubelet/pods/6a43dcce-9485-4f64-b054-bf4a25759a26/volumes" Aug 13 00:05:29.275693 kubelet[1911]: E0813 00:05:29.275677 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:29.519027 kubelet[1911]: E0813 00:05:29.518972 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:05:29.526659 env[1223]: time="2025-08-13T00:05:29.522542966Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:05:29.618048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068393232.mount: Deactivated successfully. Aug 13 00:05:29.627533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108851562.mount: Deactivated successfully. Aug 13 00:05:29.634236 env[1223]: time="2025-08-13T00:05:29.634183550Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b\"" Aug 13 00:05:29.635074 env[1223]: time="2025-08-13T00:05:29.635040173Z" level=info msg="StartContainer for \"9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b\"" Aug 13 00:05:29.654617 systemd[1]: Started cri-containerd-9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b.scope. Aug 13 00:05:29.706070 env[1223]: time="2025-08-13T00:05:29.705762874Z" level=info msg="StartContainer for \"9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b\" returns successfully" Aug 13 00:05:29.706193 systemd[1]: cri-containerd-9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b.scope: Deactivated successfully. 
Aug 13 00:05:29.732835 env[1223]: time="2025-08-13T00:05:29.732771160Z" level=info msg="shim disconnected" id=9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b
Aug 13 00:05:29.732835 env[1223]: time="2025-08-13T00:05:29.732821317Z" level=warning msg="cleaning up after shim disconnected" id=9f604a894357e68e263c97e4aeb382c312ae7f2163b06b411f451f3460dc647b namespace=k8s.io
Aug 13 00:05:29.732835 env[1223]: time="2025-08-13T00:05:29.732830836Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:29.740658 env[1223]: time="2025-08-13T00:05:29.740589481Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:30.307513 kubelet[1911]: E0813 00:05:30.307458 1911 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:05:30.524374 kubelet[1911]: E0813 00:05:30.524316 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:30.526377 env[1223]: time="2025-08-13T00:05:30.526335230Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:05:30.545084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805970661.mount: Deactivated successfully.
Aug 13 00:05:30.551493 env[1223]: time="2025-08-13T00:05:30.551393771Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0\""
Aug 13 00:05:30.553237 env[1223]: time="2025-08-13T00:05:30.552653094Z" level=info msg="StartContainer for \"5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0\""
Aug 13 00:05:30.570117 systemd[1]: Started cri-containerd-5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0.scope.
Aug 13 00:05:30.606517 systemd[1]: cri-containerd-5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0.scope: Deactivated successfully.
Aug 13 00:05:30.608880 env[1223]: time="2025-08-13T00:05:30.608838002Z" level=info msg="StartContainer for \"5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0\" returns successfully"
Aug 13 00:05:30.634953 env[1223]: time="2025-08-13T00:05:30.634903521Z" level=info msg="shim disconnected" id=5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0
Aug 13 00:05:30.634953 env[1223]: time="2025-08-13T00:05:30.634953557Z" level=warning msg="cleaning up after shim disconnected" id=5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0 namespace=k8s.io
Aug 13 00:05:30.635177 env[1223]: time="2025-08-13T00:05:30.634964957Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:30.645087 env[1223]: time="2025-08-13T00:05:30.645021499Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:30.732810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a2c9b97aa08bcfec4c0708178858dbca446b96a56e449779497326012a50cc0-rootfs.mount: Deactivated successfully.
Aug 13 00:05:31.528000 kubelet[1911]: E0813 00:05:31.527972 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:31.530047 env[1223]: time="2025-08-13T00:05:31.529951018Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:05:31.557630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404534365.mount: Deactivated successfully.
Aug 13 00:05:31.567439 env[1223]: time="2025-08-13T00:05:31.566659620Z" level=info msg="CreateContainer within sandbox \"db7b8eff5bef268c6fb79eebbd4b1541cfcf7bc83589336409655b6577b5c9b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e\""
Aug 13 00:05:31.569004 env[1223]: time="2025-08-13T00:05:31.568949371Z" level=info msg="StartContainer for \"3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e\""
Aug 13 00:05:31.584734 systemd[1]: Started cri-containerd-3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e.scope.
Aug 13 00:05:31.618814 env[1223]: time="2025-08-13T00:05:31.618764032Z" level=info msg="StartContainer for \"3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e\" returns successfully"
Aug 13 00:05:31.915601 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Aug 13 00:05:32.532668 kubelet[1911]: E0813 00:05:32.532623 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:32.551338 kubelet[1911]: I0813 00:05:32.551009 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qdrqh" podStartSLOduration=5.550992296 podStartE2EDuration="5.550992296s" podCreationTimestamp="2025-08-13 00:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:32.550744469 +0000 UTC m=+87.402605324" watchObservedRunningTime="2025-08-13 00:05:32.550992296 +0000 UTC m=+87.402853151"
Aug 13 00:05:33.860355 kubelet[1911]: E0813 00:05:33.860326 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:34.042281 systemd[1]: run-containerd-runc-k8s.io-3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e-runc.kLcBf0.mount: Deactivated successfully.
Aug 13 00:05:34.751684 systemd-networkd[1047]: lxc_health: Link UP
Aug 13 00:05:34.760505 systemd-networkd[1047]: lxc_health: Gained carrier
Aug 13 00:05:34.760691 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:05:35.861342 kubelet[1911]: E0813 00:05:35.861294 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:36.216820 systemd[1]: run-containerd-runc-k8s.io-3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e-runc.7O4CEm.mount: Deactivated successfully.
Aug 13 00:05:36.463816 systemd-networkd[1047]: lxc_health: Gained IPv6LL
Aug 13 00:05:36.541671 kubelet[1911]: E0813 00:05:36.541640 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:37.543383 kubelet[1911]: E0813 00:05:37.543047 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:05:38.407218 systemd[1]: run-containerd-runc-k8s.io-3bbcc36ae85c3e1b626a9337d1057106a9ecd89922887371aa7a54b934b4f90e-runc.AhVsrU.mount: Deactivated successfully.
Aug 13 00:05:40.613872 kubelet[1911]: E0813 00:05:40.613835 1911 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47940->127.0.0.1:41239: write tcp 127.0.0.1:47940->127.0.0.1:41239: write: broken pipe
Aug 13 00:05:40.621321 sshd[3707]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:40.624376 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:34654.service: Deactivated successfully.
Aug 13 00:05:40.625108 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:05:40.625615 systemd-logind[1211]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:05:40.626625 systemd-logind[1211]: Removed session 25.
Aug 13 00:05:41.274516 kubelet[1911]: E0813 00:05:41.274469 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"