Jul 15 11:06:20.739994 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 11:06:20.740012 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Jul 15 10:06:30 -00 2025
Jul 15 11:06:20.740020 kernel: efi: EFI v2.70 by EDK II
Jul 15 11:06:20.740026 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Jul 15 11:06:20.740031 kernel: random: crng init done
Jul 15 11:06:20.740036 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:06:20.740042 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Jul 15 11:06:20.740049 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 11:06:20.740055 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740060 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740065 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740070 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740076 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740081 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740089 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740094 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740100 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:06:20.740105 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 11:06:20.740111 kernel: NUMA: Failed to initialise from firmware
Jul 15 11:06:20.740117 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:06:20.740122 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Jul 15 11:06:20.740128 kernel: Zone ranges:
Jul 15 11:06:20.740133 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:06:20.740140 kernel: DMA32 empty
Jul 15 11:06:20.740145 kernel: Normal empty
Jul 15 11:06:20.740151 kernel: Movable zone start for each node
Jul 15 11:06:20.740156 kernel: Early memory node ranges
Jul 15 11:06:20.740162 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Jul 15 11:06:20.740167 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Jul 15 11:06:20.740173 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Jul 15 11:06:20.740179 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Jul 15 11:06:20.740184 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Jul 15 11:06:20.740190 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Jul 15 11:06:20.740195 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Jul 15 11:06:20.740201 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 11:06:20.740208 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 11:06:20.740213 kernel: psci: probing for conduit method from ACPI.
Jul 15 11:06:20.740219 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 11:06:20.740224 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 11:06:20.740230 kernel: psci: Trusted OS migration not required
Jul 15 11:06:20.740238 kernel: psci: SMC Calling Convention v1.1
Jul 15 11:06:20.740244 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 11:06:20.740251 kernel: ACPI: SRAT not present
Jul 15 11:06:20.740258 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 15 11:06:20.740264 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 15 11:06:20.740270 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 11:06:20.740276 kernel: Detected PIPT I-cache on CPU0
Jul 15 11:06:20.740282 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 11:06:20.740288 kernel: CPU features: detected: Hardware dirty bit management
Jul 15 11:06:20.740294 kernel: CPU features: detected: Spectre-v4
Jul 15 11:06:20.740300 kernel: CPU features: detected: Spectre-BHB
Jul 15 11:06:20.740307 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 11:06:20.740313 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 11:06:20.740319 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 11:06:20.740325 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 11:06:20.740331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 15 11:06:20.740337 kernel: Policy zone: DMA
Jul 15 11:06:20.740344 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=66cb9a8d6ebbbd62ba3e197b019773f14f902d0ee05716ff2fc41a726e431e67
Jul 15 11:06:20.740355 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:06:20.740361 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:06:20.740381 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:06:20.740388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:06:20.740396 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Jul 15 11:06:20.740402 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:06:20.740408 kernel: trace event string verifier disabled
Jul 15 11:06:20.740414 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 11:06:20.740420 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:06:20.740426 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:06:20.740433 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 11:06:20.740439 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:06:20.740445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:06:20.740451 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:06:20.740457 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 11:06:20.740464 kernel: GICv3: 256 SPIs implemented
Jul 15 11:06:20.740470 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 11:06:20.740476 kernel: GICv3: Distributor has no Range Selector support
Jul 15 11:06:20.740482 kernel: Root IRQ handler: gic_handle_irq
Jul 15 11:06:20.740488 kernel: GICv3: 16 PPIs implemented
Jul 15 11:06:20.740494 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 11:06:20.740500 kernel: ACPI: SRAT not present
Jul 15 11:06:20.740506 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 11:06:20.740512 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 11:06:20.740522 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Jul 15 11:06:20.740529 kernel: GICv3: using LPI property table @0x00000000400d0000
Jul 15 11:06:20.740535 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Jul 15 11:06:20.740543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:06:20.740549 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 11:06:20.740555 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 11:06:20.740561 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 11:06:20.740567 kernel: arm-pv: using stolen time PV
Jul 15 11:06:20.740574 kernel: Console: colour dummy device 80x25
Jul 15 11:06:20.740580 kernel: ACPI: Core revision 20210730
Jul 15 11:06:20.740586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 11:06:20.740593 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:06:20.740599 kernel: LSM: Security Framework initializing
Jul 15 11:06:20.740606 kernel: SELinux: Initializing.
Jul 15 11:06:20.740612 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:06:20.740619 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:06:20.740631 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:06:20.740638 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 15 11:06:20.740644 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 15 11:06:20.740650 kernel: Remapping and enabling EFI services.
Jul 15 11:06:20.740656 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:06:20.740663 kernel: Detected PIPT I-cache on CPU1
Jul 15 11:06:20.740671 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 11:06:20.740682 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Jul 15 11:06:20.740690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:06:20.740696 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 11:06:20.740702 kernel: Detected PIPT I-cache on CPU2
Jul 15 11:06:20.740709 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 11:06:20.740715 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Jul 15 11:06:20.740721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:06:20.740727 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 11:06:20.740733 kernel: Detected PIPT I-cache on CPU3
Jul 15 11:06:20.740741 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 11:06:20.740748 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Jul 15 11:06:20.740754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 11:06:20.740760 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 11:06:20.740771 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:06:20.740778 kernel: SMP: Total of 4 processors activated.
Jul 15 11:06:20.740785 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 11:06:20.740791 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 11:06:20.740815 kernel: CPU features: detected: Common not Private translations
Jul 15 11:06:20.740821 kernel: CPU features: detected: CRC32 instructions
Jul 15 11:06:20.740828 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 11:06:20.740835 kernel: CPU features: detected: LSE atomic instructions
Jul 15 11:06:20.740843 kernel: CPU features: detected: Privileged Access Never
Jul 15 11:06:20.740849 kernel: CPU features: detected: RAS Extension Support
Jul 15 11:06:20.740856 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 11:06:20.740862 kernel: CPU: All CPU(s) started at EL1
Jul 15 11:06:20.740869 kernel: alternatives: patching kernel code
Jul 15 11:06:20.740876 kernel: devtmpfs: initialized
Jul 15 11:06:20.740882 kernel: KASLR enabled
Jul 15 11:06:20.740889 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:06:20.740896 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:06:20.740902 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:06:20.740909 kernel: SMBIOS 3.0.0 present.
Jul 15 11:06:20.740915 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jul 15 11:06:20.740922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:06:20.740928 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 11:06:20.740936 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 11:06:20.740943 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 11:06:20.740950 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:06:20.740956 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Jul 15 11:06:20.740963 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:06:20.740969 kernel: cpuidle: using governor menu
Jul 15 11:06:20.740975 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 11:06:20.740982 kernel: ASID allocator initialised with 32768 entries
Jul 15 11:06:20.740988 kernel: ACPI: bus type PCI registered
Jul 15 11:06:20.740996 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:06:20.741002 kernel: Serial: AMBA PL011 UART driver
Jul 15 11:06:20.741009 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:06:20.741015 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 11:06:20.741022 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:06:20.741028 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 11:06:20.741035 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 11:06:20.741041 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 11:06:20.741048 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:06:20.741056 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:06:20.741062 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:06:20.741069 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:06:20.741075 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:06:20.741081 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:06:20.741088 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:06:20.741094 kernel: ACPI: Interpreter enabled
Jul 15 11:06:20.741101 kernel: ACPI: Using GIC for interrupt routing
Jul 15 11:06:20.741107 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 11:06:20.741115 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 11:06:20.741122 kernel: printk: console [ttyAMA0] enabled
Jul 15 11:06:20.741128 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:06:20.741249 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:06:20.741313 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 11:06:20.741374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 11:06:20.741432 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 11:06:20.741491 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 11:06:20.741500 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 11:06:20.741506 kernel: PCI host bridge to bus 0000:00
Jul 15 11:06:20.741572 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 11:06:20.741637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 11:06:20.741701 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 11:06:20.741755 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:06:20.741829 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 15 11:06:20.741903 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:06:20.741965 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 15 11:06:20.742025 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 15 11:06:20.742083 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 11:06:20.742141 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 11:06:20.742199 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 15 11:06:20.742259 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 15 11:06:20.742312 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 11:06:20.742363 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 11:06:20.742415 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 11:06:20.742423 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 11:06:20.742430 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 11:06:20.742437 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 11:06:20.742445 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 11:06:20.742451 kernel: iommu: Default domain type: Translated
Jul 15 11:06:20.742458 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 11:06:20.742464 kernel: vgaarb: loaded
Jul 15 11:06:20.742471 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:06:20.742477 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:06:20.742484 kernel: PTP clock support registered
Jul 15 11:06:20.742490 kernel: Registered efivars operations
Jul 15 11:06:20.742497 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 11:06:20.742503 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:06:20.742511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:06:20.742518 kernel: pnp: PnP ACPI init
Jul 15 11:06:20.742583 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 11:06:20.742592 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 11:06:20.742599 kernel: NET: Registered PF_INET protocol family
Jul 15 11:06:20.742605 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:06:20.742612 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:06:20.742619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:06:20.742638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:06:20.742647 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:06:20.742654 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:06:20.742660 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:06:20.742667 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:06:20.742674 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:06:20.742685 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:06:20.742692 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 15 11:06:20.742699 kernel: kvm [1]: HYP mode not available
Jul 15 11:06:20.742707 kernel: Initialise system trusted keyrings
Jul 15 11:06:20.742714 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:06:20.742721 kernel: Key type asymmetric registered
Jul 15 11:06:20.742727 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:06:20.742733 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:06:20.742740 kernel: io scheduler mq-deadline registered
Jul 15 11:06:20.742746 kernel: io scheduler kyber registered
Jul 15 11:06:20.742753 kernel: io scheduler bfq registered
Jul 15 11:06:20.742759 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 11:06:20.742767 kernel: ACPI: button: Power Button [PWRB]
Jul 15 11:06:20.742774 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 11:06:20.742842 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 11:06:20.742851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:06:20.742858 kernel: thunder_xcv, ver 1.0
Jul 15 11:06:20.742864 kernel: thunder_bgx, ver 1.0
Jul 15 11:06:20.743246 kernel: nicpf, ver 1.0
Jul 15 11:06:20.743263 kernel: nicvf, ver 1.0
Jul 15 11:06:20.743368 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 11:06:20.743432 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T11:06:20 UTC (1752577580)
Jul 15 11:06:20.743442 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 11:06:20.743448 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:06:20.743455 kernel: Segment Routing with IPv6
Jul 15 11:06:20.743461 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:06:20.743468 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:06:20.743474 kernel: Key type dns_resolver registered
Jul 15 11:06:20.743481 kernel: registered taskstats version 1
Jul 15 11:06:20.743490 kernel: Loading compiled-in X.509 certificates
Jul 15 11:06:20.743496 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: 1835a6fea2ba29f82433ea6fde09cb345fc75fe9'
Jul 15 11:06:20.743503 kernel: Key type .fscrypt registered
Jul 15 11:06:20.743509 kernel: Key type fscrypt-provisioning registered
Jul 15 11:06:20.743516 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:06:20.743522 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:06:20.743529 kernel: ima: No architecture policies found
Jul 15 11:06:20.743535 kernel: clk: Disabling unused clocks
Jul 15 11:06:20.743542 kernel: Freeing unused kernel memory: 36416K
Jul 15 11:06:20.743549 kernel: Run /init as init process
Jul 15 11:06:20.743556 kernel: with arguments:
Jul 15 11:06:20.743562 kernel: /init
Jul 15 11:06:20.743569 kernel: with environment:
Jul 15 11:06:20.743585 kernel: HOME=/
Jul 15 11:06:20.743593 kernel: TERM=linux
Jul 15 11:06:20.743599 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:06:20.743608 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:06:20.743619 systemd[1]: Detected virtualization kvm.
Jul 15 11:06:20.743636 systemd[1]: Detected architecture arm64.
Jul 15 11:06:20.743643 systemd[1]: Running in initrd.
Jul 15 11:06:20.743650 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:06:20.743657 systemd[1]: Hostname set to .
Jul 15 11:06:20.743664 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:06:20.743671 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:06:20.743685 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:06:20.743696 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:06:20.743703 systemd[1]: Reached target paths.target.
Jul 15 11:06:20.743710 systemd[1]: Reached target slices.target.
Jul 15 11:06:20.743717 systemd[1]: Reached target swap.target.
Jul 15 11:06:20.743724 systemd[1]: Reached target timers.target.
Jul 15 11:06:20.743731 systemd[1]: Listening on iscsid.socket.
Jul 15 11:06:20.743738 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:06:20.743746 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:06:20.743753 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:06:20.743761 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:06:20.743767 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:06:20.743774 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:06:20.743781 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:06:20.743788 systemd[1]: Reached target sockets.target.
Jul 15 11:06:20.743795 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:06:20.743802 systemd[1]: Finished network-cleanup.service.
Jul 15 11:06:20.743811 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:06:20.743818 systemd[1]: Starting systemd-journald.service...
Jul 15 11:06:20.743825 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:06:20.743832 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:06:20.743839 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:06:20.743846 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:06:20.743853 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:06:20.743860 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:06:20.743867 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 15 11:06:20.743875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 15 11:06:20.743885 systemd-journald[289]: Journal started
Jul 15 11:06:20.743930 systemd-journald[289]: Runtime Journal (/run/log/journal/81bc0e12626749cfa3db8a5522ec692d) is 6.0M, max 48.7M, 42.6M free.
Jul 15 11:06:20.736168 systemd-modules-load[290]: Inserted module 'overlay'
Jul 15 11:06:20.746234 systemd[1]: Started systemd-journald.service.
Jul 15 11:06:20.750389 kernel: audit: type=1130 audit(1752577580.746:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.750657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 15 11:06:20.756503 kernel: audit: type=1130 audit(1752577580.752:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.761647 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:06:20.763484 systemd-resolved[291]: Positive Trust Anchors:
Jul 15 11:06:20.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.763500 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:06:20.768834 kernel: audit: type=1130 audit(1752577580.764:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.763528 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:06:20.763533 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 15 11:06:20.776057 kernel: Bridge firewalling registered
Jul 15 11:06:20.767720 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 15 11:06:20.781153 kernel: audit: type=1130 audit(1752577580.776:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:20.768268 systemd[1]: Starting dracut-cmdline.service...
Jul 15 11:06:20.769569 systemd[1]: Started systemd-resolved.service.
Jul 15 11:06:20.774816 systemd-modules-load[290]: Inserted module 'br_netfilter'
Jul 15 11:06:20.777069 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:06:20.784832 dracut-cmdline[306]: dracut-dracut-053 Jul 15 11:06:20.786643 kernel: SCSI subsystem initialized Jul 15 11:06:20.786857 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=66cb9a8d6ebbbd62ba3e197b019773f14f902d0ee05716ff2fc41a726e431e67 Jul 15 11:06:20.793650 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:06:20.793675 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:06:20.793689 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:06:20.796884 systemd-modules-load[290]: Inserted module 'dm_multipath' Jul 15 11:06:20.802023 kernel: audit: type=1130 audit(1752577580.798:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:20.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:20.797894 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:06:20.799489 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:06:20.806542 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:06:20.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:20.810657 kernel: audit: type=1130 audit(1752577580.806:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:20.845658 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:06:20.858653 kernel: iscsi: registered transport (tcp) Jul 15 11:06:20.873659 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:06:20.873686 kernel: QLogic iSCSI HBA Driver Jul 15 11:06:20.906600 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:06:20.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:20.908229 systemd[1]: Starting dracut-pre-udev.service... Jul 15 11:06:20.911873 kernel: audit: type=1130 audit(1752577580.906:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:20.951665 kernel: raid6: neonx8 gen() 13656 MB/s Jul 15 11:06:20.968653 kernel: raid6: neonx8 xor() 10670 MB/s Jul 15 11:06:20.985653 kernel: raid6: neonx4 gen() 13426 MB/s Jul 15 11:06:21.002654 kernel: raid6: neonx4 xor() 11129 MB/s Jul 15 11:06:21.019664 kernel: raid6: neonx2 gen() 12905 MB/s Jul 15 11:06:21.036654 kernel: raid6: neonx2 xor() 10466 MB/s Jul 15 11:06:21.053656 kernel: raid6: neonx1 gen() 10549 MB/s Jul 15 11:06:21.070655 kernel: raid6: neonx1 xor() 8731 MB/s Jul 15 11:06:21.087653 kernel: raid6: int64x8 gen() 6213 MB/s Jul 15 11:06:21.104651 kernel: raid6: int64x8 xor() 3520 MB/s Jul 15 11:06:21.121652 kernel: raid6: int64x4 gen() 7179 MB/s Jul 15 11:06:21.138651 kernel: raid6: int64x4 xor() 3838 MB/s Jul 15 11:06:21.155655 kernel: raid6: int64x2 gen() 6124 MB/s Jul 15 11:06:21.172654 kernel: raid6: int64x2 xor() 3298 MB/s Jul 15 11:06:21.189657 kernel: raid6: int64x1 gen() 5020 MB/s Jul 15 11:06:21.206746 kernel: raid6: int64x1 xor() 2633 MB/s Jul 15 11:06:21.206761 kernel: raid6: using algorithm neonx8 gen() 13656 MB/s Jul 15 11:06:21.206770 kernel: raid6: .... xor() 10670 MB/s, rmw enabled Jul 15 11:06:21.207815 kernel: raid6: using neon recovery algorithm Jul 15 11:06:21.218656 kernel: xor: measuring software checksum speed Jul 15 11:06:21.218697 kernel: 8regs : 16911 MB/sec Jul 15 11:06:21.218716 kernel: 32regs : 20697 MB/sec Jul 15 11:06:21.219760 kernel: arm64_neon : 26665 MB/sec Jul 15 11:06:21.219772 kernel: xor: using function: arm64_neon (26665 MB/sec) Jul 15 11:06:21.274657 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 15 11:06:21.284985 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:06:21.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:21.286884 systemd[1]: Starting systemd-udevd.service... 
Jul 15 11:06:21.290960 kernel: audit: type=1130 audit(1752577581.285:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:21.290988 kernel: audit: type=1334 audit(1752577581.285:10): prog-id=7 op=LOAD Jul 15 11:06:21.285000 audit: BPF prog-id=7 op=LOAD Jul 15 11:06:21.285000 audit: BPF prog-id=8 op=LOAD Jul 15 11:06:21.305359 systemd-udevd[490]: Using default interface naming scheme 'v252'. Jul 15 11:06:21.309871 systemd[1]: Started systemd-udevd.service. Jul 15 11:06:21.311513 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:06:21.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:21.323350 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Jul 15 11:06:21.350932 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:06:21.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:21.352538 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:06:21.388188 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:06:21.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:21.417646 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:06:21.424818 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 11:06:21.424834 kernel: GPT:9289727 != 19775487 Jul 15 11:06:21.424848 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 15 11:06:21.424857 kernel: GPT:9289727 != 19775487 Jul 15 11:06:21.424865 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:06:21.424873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:06:21.439654 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (553) Jul 15 11:06:21.442398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:06:21.446163 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:06:21.447320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:06:21.451875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:06:21.455251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:06:21.457183 systemd[1]: Starting disk-uuid.service... Jul 15 11:06:21.465442 disk-uuid[562]: Primary Header is updated. Jul 15 11:06:21.465442 disk-uuid[562]: Secondary Entries is updated. Jul 15 11:06:21.465442 disk-uuid[562]: Secondary Header is updated. Jul 15 11:06:21.469650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:06:21.474653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:06:21.478656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:06:22.479570 disk-uuid[563]: The operation has completed successfully. Jul 15 11:06:22.480761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:06:22.505147 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:06:22.505241 systemd[1]: Finished disk-uuid.service. Jul 15 11:06:22.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
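The GPT complaints above are the classic signature of a disk image that was grown after partitioning: the backup GPT header still sits at the old last LBA (9289727) instead of the new one. The log's numbers are self-consistent, as a quick check shows (the actual repair needs root and the real device, so only the arithmetic is reproduced here):

```shell
# Sanity-check the figures from the log: 19775488 512-byte sectors means
# the backup GPT header belongs at LBA 19775487, not the old LBA 9289727.
blocks=19775488
echo $(( blocks - 1 ))      # last LBA, where the backup header belongs -> 19775487
echo $(( blocks * 512 ))    # disk size in bytes -> 10125049856 (~10.1 GB / ~9.43 GiB)
```

Relocating the backup structures to the new end of the disk is typically done with `sgdisk -e /dev/vda` or by accepting parted's Fix prompt, as the kernel message suggests.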
res=success' Jul 15 11:06:22.506901 systemd[1]: Starting verity-setup.service... Jul 15 11:06:22.527647 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 15 11:06:22.548281 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:06:22.550538 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:06:22.552539 systemd[1]: Finished verity-setup.service. Jul 15 11:06:22.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.599659 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:06:22.599876 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:06:22.600747 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:06:22.601467 systemd[1]: Starting ignition-setup.service... Jul 15 11:06:22.603835 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:06:22.609982 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 11:06:22.610016 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:06:22.610026 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:06:22.618991 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:06:22.650654 systemd[1]: Finished ignition-setup.service. Jul 15 11:06:22.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.652262 systemd[1]: Starting ignition-fetch-offline.service... Jul 15 11:06:22.679980 systemd[1]: Finished parse-ip-for-networkd.service. 
Jul 15 11:06:22.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.681000 audit: BPF prog-id=9 op=LOAD Jul 15 11:06:22.682248 systemd[1]: Starting systemd-networkd.service... Jul 15 11:06:22.712733 systemd-networkd[736]: lo: Link UP Jul 15 11:06:22.713614 systemd-networkd[736]: lo: Gained carrier Jul 15 11:06:22.714963 systemd-networkd[736]: Enumeration completed Jul 15 11:06:22.715999 systemd[1]: Started systemd-networkd.service. Jul 15 11:06:22.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.716878 systemd[1]: Reached target network.target. Jul 15 11:06:22.718988 systemd[1]: Starting iscsiuio.service... Jul 15 11:06:22.718996 systemd-networkd[736]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:06:22.723131 systemd-networkd[736]: eth0: Link UP Jul 15 11:06:22.723224 systemd-networkd[736]: eth0: Gained carrier Jul 15 11:06:22.733779 systemd[1]: Started iscsiuio.service. Jul 15 11:06:22.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.736430 systemd[1]: Starting iscsid.service... Jul 15 11:06:22.739736 systemd-networkd[736]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:06:22.741116 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:06:22.741116 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:06:22.741116 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:06:22.741116 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:06:22.741116 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:06:22.741116 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:06:22.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.750276 ignition[700]: Ignition 2.14.0 Jul 15 11:06:22.743904 systemd[1]: Started iscsid.service. Jul 15 11:06:22.750283 ignition[700]: Stage: fetch-offline Jul 15 11:06:22.750230 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:06:22.750321 ignition[700]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:22.750329 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:22.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.759849 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:06:22.750466 ignition[700]: parsed url from cmdline: "" Jul 15 11:06:22.760836 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:06:22.750469 ignition[700]: no config URL provided Jul 15 11:06:22.762591 systemd[1]: Reached target remote-cryptsetup.target.
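The iscsid warning above carries its own remedy: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line. A minimal sketch (the IQN value is a placeholder, and the file is written locally here so the sketch runs unprivileged; the real deployment path is /etc/iscsi/initiatorname.iscsi):

```shell
# Build the file iscsid asks for; the iqn date/domain below are placeholders.
printf 'InitiatorName=%s\n' 'iqn.2001-04.com.example:node1' > initiatorname.iscsi
# The shape iscsid expects: an iqn.yyyy-mm. prefix followed by an identifier.
grep '^InitiatorName=iqn\.[0-9]\{4\}-[0-9]\{2\}\.' initiatorname.iscsi
```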
Jul 15 11:06:22.750474 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:06:22.764291 systemd[1]: Reached target remote-fs.target. Jul 15 11:06:22.750481 ignition[700]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:06:22.766537 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:06:22.750498 ignition[700]: op(1): [started] loading QEMU firmware config module Jul 15 11:06:22.750502 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:06:22.754921 ignition[700]: op(1): [finished] loading QEMU firmware config module Jul 15 11:06:22.773883 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:06:22.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.807433 ignition[700]: parsing config with SHA512: 699aecb06cb35e560ef2a0010822fd74edb3cc1b49acf96393e93305c1ff2ec1da4b608d6e1270524a78058cbe1f87ad503b0335f5a12db8890170a62a982230 Jul 15 11:06:22.814401 unknown[700]: fetched base config from "system" Jul 15 11:06:22.814418 unknown[700]: fetched user config from "qemu" Jul 15 11:06:22.815102 ignition[700]: fetch-offline: fetch-offline passed Jul 15 11:06:22.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.816006 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:06:22.815182 ignition[700]: Ignition finished successfully Jul 15 11:06:22.817045 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:06:22.817765 systemd[1]: Starting ignition-kargs.service... 
Jul 15 11:06:22.826020 ignition[763]: Ignition 2.14.0 Jul 15 11:06:22.826030 ignition[763]: Stage: kargs Jul 15 11:06:22.826114 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:22.828234 systemd[1]: Finished ignition-kargs.service. Jul 15 11:06:22.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.826124 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:22.826935 ignition[763]: kargs: kargs passed Jul 15 11:06:22.830464 systemd[1]: Starting ignition-disks.service... Jul 15 11:06:22.826973 ignition[763]: Ignition finished successfully Jul 15 11:06:22.836286 ignition[769]: Ignition 2.14.0 Jul 15 11:06:22.836295 ignition[769]: Stage: disks Jul 15 11:06:22.836381 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:22.836390 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:22.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.837922 systemd[1]: Finished ignition-disks.service. Jul 15 11:06:22.837202 ignition[769]: disks: disks passed Jul 15 11:06:22.839310 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:06:22.837238 ignition[769]: Ignition finished successfully Jul 15 11:06:22.841147 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:06:22.842651 systemd[1]: Reached target local-fs.target. Jul 15 11:06:22.843892 systemd[1]: Reached target sysinit.target. Jul 15 11:06:22.845655 systemd[1]: Reached target basic.target. Jul 15 11:06:22.847796 systemd[1]: Starting systemd-fsck-root.service... 
Jul 15 11:06:22.858332 systemd-fsck[777]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 15 11:06:22.861335 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:06:22.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.862924 systemd[1]: Mounting sysroot.mount... Jul 15 11:06:22.868654 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:06:22.868860 systemd[1]: Mounted sysroot.mount. Jul 15 11:06:22.869602 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:06:22.872470 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:06:22.873611 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:06:22.873661 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:06:22.873708 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:06:22.875776 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:06:22.877702 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:06:22.881889 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:06:22.885541 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:06:22.889604 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:06:22.893667 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:06:22.920843 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:06:22.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:22.922419 systemd[1]: Starting ignition-mount.service... Jul 15 11:06:22.923870 systemd[1]: Starting sysroot-boot.service... Jul 15 11:06:22.928445 bash[828]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:06:22.936648 ignition[830]: INFO : Ignition 2.14.0 Jul 15 11:06:22.937537 ignition[830]: INFO : Stage: mount Jul 15 11:06:22.937537 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:22.937537 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:22.940346 ignition[830]: INFO : mount: mount passed Jul 15 11:06:22.940346 ignition[830]: INFO : Ignition finished successfully Jul 15 11:06:22.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.939300 systemd[1]: Finished ignition-mount.service. Jul 15 11:06:22.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:22.942873 systemd[1]: Finished sysroot-boot.service. Jul 15 11:06:23.559070 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:06:23.564642 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (838) Jul 15 11:06:23.566811 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 11:06:23.566839 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:06:23.566864 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:06:23.570078 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 15 11:06:23.571571 systemd[1]: Starting ignition-files.service... 
Jul 15 11:06:23.584825 ignition[858]: INFO : Ignition 2.14.0 Jul 15 11:06:23.584825 ignition[858]: INFO : Stage: files Jul 15 11:06:23.586379 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:23.586379 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:23.586379 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:06:23.589924 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:06:23.589924 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:06:23.592828 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:06:23.592828 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:06:23.592828 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:06:23.592828 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 11:06:23.592828 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 15 11:06:23.590664 unknown[858]: wrote ssh authorized keys file for user: core Jul 15 11:06:23.693349 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 11:06:23.873806 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 15 11:06:23.875939 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 11:06:23.875939 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 15 11:06:24.135240 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 11:06:24.188475 systemd-networkd[736]: eth0: Gained IPv6LL Jul 15 11:06:24.196726 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 11:06:24.198559 ignition[858]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 11:06:24.198559 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 15 11:06:24.618434 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 11:06:25.043214 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 15 11:06:25.043214 ignition[858]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 11:06:25.047064 ignition[858]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:06:25.106422 ignition[858]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 11:06:25.108331 ignition[858]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 11:06:25.108331 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:06:25.108331 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 11:06:25.108331 ignition[858]: INFO : files: files passed Jul 15 11:06:25.108331 ignition[858]: INFO : Ignition finished successfully Jul 15 11:06:25.123487 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 15 11:06:25.123509 kernel: audit: type=1130 audit(1752577585.108:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.123520 kernel: audit: type=1130 audit(1752577585.118:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:25.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.107876 systemd[1]: Finished ignition-files.service. Jul 15 11:06:25.109866 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 15 11:06:25.116350 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 15 11:06:25.128490 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 15 11:06:25.135140 kernel: audit: type=1130 audit(1752577585.128:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.135161 kernel: audit: type=1131 audit(1752577585.128:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.117396 systemd[1]: Starting ignition-quench.service... 
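The file-write ops logged during the files stage above come from the machine's Ignition config. A fragment in that style, sketched from the helm download logged as op(3) (the spec version and everything except the path and URL are assumptions, not taken from this log):

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz" }
      }
    ]
  }
}
```

Paths in the config omit the /sysroot prefix seen in the log; Ignition prepends it while writing into the not-yet-pivoted root.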
Jul 15 11:06:25.136132 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 11:06:25.118800 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 15 11:06:25.120334 systemd[1]: Reached target ignition-complete.target. Jul 15 11:06:25.125228 systemd[1]: Starting initrd-parse-etc.service... Jul 15 11:06:25.127349 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 11:06:25.127465 systemd[1]: Finished ignition-quench.service. Jul 15 11:06:25.142901 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 11:06:25.143017 systemd[1]: Finished initrd-parse-etc.service. Jul 15 11:06:25.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.144876 systemd[1]: Reached target initrd-fs.target. Jul 15 11:06:25.151423 kernel: audit: type=1130 audit(1752577585.144:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.151445 kernel: audit: type=1131 audit(1752577585.144:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.150843 systemd[1]: Reached target initrd.target. Jul 15 11:06:25.152130 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 15 11:06:25.152976 systemd[1]: Starting dracut-pre-pivot.service... Jul 15 11:06:25.167674 systemd[1]: Finished dracut-pre-pivot.service. 
Jul 15 11:06:25.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.169601 systemd[1]: Starting initrd-cleanup.service... Jul 15 11:06:25.172918 kernel: audit: type=1130 audit(1752577585.167:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.179178 systemd[1]: Stopped target nss-lookup.target. Jul 15 11:06:25.180096 systemd[1]: Stopped target remote-cryptsetup.target. Jul 15 11:06:25.181558 systemd[1]: Stopped target timers.target. Jul 15 11:06:25.182954 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 11:06:25.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.183067 systemd[1]: Stopped dracut-pre-pivot.service. Jul 15 11:06:25.188697 kernel: audit: type=1131 audit(1752577585.183:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.184405 systemd[1]: Stopped target initrd.target. Jul 15 11:06:25.188154 systemd[1]: Stopped target basic.target. Jul 15 11:06:25.189417 systemd[1]: Stopped target ignition-complete.target. Jul 15 11:06:25.190822 systemd[1]: Stopped target ignition-diskful.target. Jul 15 11:06:25.192183 systemd[1]: Stopped target initrd-root-device.target. Jul 15 11:06:25.193694 systemd[1]: Stopped target remote-fs.target. Jul 15 11:06:25.195155 systemd[1]: Stopped target remote-fs-pre.target. Jul 15 11:06:25.196657 systemd[1]: Stopped target sysinit.target. Jul 15 11:06:25.197988 systemd[1]: Stopped target local-fs.target. 
Jul 15 11:06:25.199462 systemd[1]: Stopped target local-fs-pre.target. Jul 15 11:06:25.200828 systemd[1]: Stopped target swap.target. Jul 15 11:06:25.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.202048 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 11:06:25.207791 kernel: audit: type=1131 audit(1752577585.202:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.202164 systemd[1]: Stopped dracut-pre-mount.service. Jul 15 11:06:25.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.203525 systemd[1]: Stopped target cryptsetup.target. Jul 15 11:06:25.212977 kernel: audit: type=1131 audit(1752577585.207:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.207055 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 11:06:25.207161 systemd[1]: Stopped dracut-initqueue.service. Jul 15 11:06:25.208623 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 11:06:25.208745 systemd[1]: Stopped ignition-fetch-offline.service. Jul 15 11:06:25.212469 systemd[1]: Stopped target paths.target. Jul 15 11:06:25.213686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 15 11:06:25.217682 systemd[1]: Stopped systemd-ask-password-console.path. Jul 15 11:06:25.218697 systemd[1]: Stopped target slices.target. Jul 15 11:06:25.220293 systemd[1]: Stopped target sockets.target. Jul 15 11:06:25.221702 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 11:06:25.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.221800 systemd[1]: Closed iscsid.socket. Jul 15 11:06:25.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.222943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 11:06:25.223052 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 15 11:06:25.224417 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 11:06:25.224510 systemd[1]: Stopped ignition-files.service. Jul 15 11:06:25.226712 systemd[1]: Stopping ignition-mount.service... Jul 15 11:06:25.228766 systemd[1]: Stopping iscsiuio.service... Jul 15 11:06:25.234480 ignition[898]: INFO : Ignition 2.14.0 Jul 15 11:06:25.234480 ignition[898]: INFO : Stage: umount Jul 15 11:06:25.236013 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:06:25.236013 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:06:25.236013 ignition[898]: INFO : umount: umount passed Jul 15 11:06:25.236013 ignition[898]: INFO : Ignition finished successfully Jul 15 11:06:25.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:25.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.235971 systemd[1]: Stopping sysroot-boot.service... Jul 15 11:06:25.236711 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 11:06:25.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.236861 systemd[1]: Stopped systemd-udev-trigger.service. Jul 15 11:06:25.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.238528 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 11:06:25.238621 systemd[1]: Stopped dracut-pre-trigger.service. Jul 15 11:06:25.242064 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 15 11:06:25.242200 systemd[1]: Stopped iscsiuio.service. Jul 15 11:06:25.243762 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 11:06:25.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.243901 systemd[1]: Stopped ignition-mount.service. Jul 15 11:06:25.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.247151 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 15 11:06:25.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.247762 systemd[1]: Stopped target network.target. Jul 15 11:06:25.248578 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 11:06:25.248611 systemd[1]: Closed iscsiuio.socket. Jul 15 11:06:25.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.250005 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 11:06:25.250048 systemd[1]: Stopped ignition-disks.service. Jul 15 11:06:25.251320 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 11:06:25.251361 systemd[1]: Stopped ignition-kargs.service. Jul 15 11:06:25.252808 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 11:06:25.252851 systemd[1]: Stopped ignition-setup.service. Jul 15 11:06:25.254258 systemd[1]: Stopping systemd-networkd.service... Jul 15 11:06:25.256450 systemd[1]: Stopping systemd-resolved.service... Jul 15 11:06:25.258178 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 11:06:25.258258 systemd[1]: Finished initrd-cleanup.service. Jul 15 11:06:25.266705 systemd-networkd[736]: eth0: DHCPv6 lease lost Jul 15 11:06:25.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.267791 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jul 15 11:06:25.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.267894 systemd[1]: Stopped systemd-resolved.service. Jul 15 11:06:25.269498 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 11:06:25.269584 systemd[1]: Stopped systemd-networkd.service. Jul 15 11:06:25.274000 audit: BPF prog-id=6 op=UNLOAD Jul 15 11:06:25.270688 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 11:06:25.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.270717 systemd[1]: Closed systemd-networkd.socket. Jul 15 11:06:25.276000 audit: BPF prog-id=9 op=UNLOAD Jul 15 11:06:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.272604 systemd[1]: Stopping network-cleanup.service... Jul 15 11:06:25.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.274196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 11:06:25.274265 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 15 11:06:25.275877 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:06:25.275923 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:06:25.278322 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 11:06:25.278367 systemd[1]: Stopped systemd-modules-load.service. 
Jul 15 11:06:25.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.279406 systemd[1]: Stopping systemd-udevd.service... Jul 15 11:06:25.285269 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 11:06:25.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.288407 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 11:06:25.288587 systemd[1]: Stopped network-cleanup.service. Jul 15 11:06:25.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.290661 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 11:06:25.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.290811 systemd[1]: Stopped systemd-udevd.service. Jul 15 11:06:25.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.293967 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 11:06:25.293999 systemd[1]: Closed systemd-udevd-control.socket. Jul 15 11:06:25.295871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jul 15 11:06:25.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.295906 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 15 11:06:25.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.297324 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 11:06:25.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.297374 systemd[1]: Stopped dracut-pre-udev.service. Jul 15 11:06:25.299460 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 11:06:25.299506 systemd[1]: Stopped dracut-cmdline.service. Jul 15 11:06:25.300788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 11:06:25.300832 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 15 11:06:25.303183 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 15 11:06:25.305313 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 11:06:25.305378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 15 11:06:25.307766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 11:06:25.307810 systemd[1]: Stopped kmod-static-nodes.service. Jul 15 11:06:25.308623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 11:06:25.308690 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 15 11:06:25.310904 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jul 15 11:06:25.311354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 11:06:25.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.311451 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 15 11:06:25.356439 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 11:06:25.356537 systemd[1]: Stopped sysroot-boot.service. Jul 15 11:06:25.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.358237 systemd[1]: Reached target initrd-switch-root.target. Jul 15 11:06:25.359470 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 11:06:25.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:25.359524 systemd[1]: Stopped initrd-setup-root.service. Jul 15 11:06:25.361699 systemd[1]: Starting initrd-switch-root.service... Jul 15 11:06:25.368287 systemd[1]: Switching root. Jul 15 11:06:25.383113 iscsid[746]: iscsid shutting down. Jul 15 11:06:25.383889 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Jul 15 11:06:25.383934 systemd-journald[289]: Journal stopped Jul 15 11:06:27.405917 kernel: SELinux: Class mctp_socket not defined in policy. Jul 15 11:06:27.406003 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 15 11:06:27.406017 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 15 11:06:27.406028 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 11:06:27.406038 kernel: SELinux: policy capability open_perms=1 Jul 15 11:06:27.406047 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 11:06:27.406066 kernel: SELinux: policy capability always_check_network=0 Jul 15 11:06:27.406076 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 11:06:27.406085 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 11:06:27.406096 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 11:06:27.406267 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 11:06:27.406289 systemd[1]: Successfully loaded SELinux policy in 33.811ms. Jul 15 11:06:27.406308 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.824ms. Jul 15 11:06:27.406319 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:06:27.406333 systemd[1]: Detected virtualization kvm. Jul 15 11:06:27.406344 systemd[1]: Detected architecture arm64. Jul 15 11:06:27.406354 systemd[1]: Detected first boot. Jul 15 11:06:27.406365 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:06:27.406391 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 15 11:06:27.406403 systemd[1]: Populated /etc with preset unit settings. Jul 15 11:06:27.406420 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 15 11:06:27.406432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:06:27.406443 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:06:27.406456 systemd[1]: iscsid.service: Deactivated successfully. Jul 15 11:06:27.406466 systemd[1]: Stopped iscsid.service. Jul 15 11:06:27.406477 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 11:06:27.406487 systemd[1]: Stopped initrd-switch-root.service. Jul 15 11:06:27.406497 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 11:06:27.406508 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 15 11:06:27.406519 systemd[1]: Created slice system-addon\x2drun.slice. Jul 15 11:06:27.406529 systemd[1]: Created slice system-getty.slice. Jul 15 11:06:27.406541 systemd[1]: Created slice system-modprobe.slice. Jul 15 11:06:27.406552 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 15 11:06:27.406562 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:06:27.406572 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:06:27.406583 systemd[1]: Created slice user.slice. Jul 15 11:06:27.406593 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:06:27.406603 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:06:27.406613 systemd[1]: Set up automount boot.automount. Jul 15 11:06:27.406623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:06:27.406646 systemd[1]: Stopped target initrd-switch-root.target. Jul 15 11:06:27.406662 systemd[1]: Stopped target initrd-fs.target. Jul 15 11:06:27.406675 systemd[1]: Stopped target initrd-root-fs.target. 
Jul 15 11:06:27.406685 systemd[1]: Reached target integritysetup.target. Jul 15 11:06:27.406697 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:06:27.406707 systemd[1]: Reached target remote-fs.target. Jul 15 11:06:27.406717 systemd[1]: Reached target slices.target. Jul 15 11:06:27.406729 systemd[1]: Reached target swap.target. Jul 15 11:06:27.406741 systemd[1]: Reached target torcx.target. Jul 15 11:06:27.406752 systemd[1]: Reached target veritysetup.target. Jul 15 11:06:27.406762 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:06:27.406772 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:06:27.406782 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:06:27.406793 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:06:27.406803 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:06:27.406815 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:06:27.406825 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:06:27.406837 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:06:27.406847 systemd[1]: Mounting media.mount... Jul 15 11:06:27.406858 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:06:27.406868 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:06:27.406878 systemd[1]: Mounting tmp.mount... Jul 15 11:06:27.406888 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:06:27.406898 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:06:27.406908 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:06:27.406919 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:06:27.406930 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:06:27.406941 systemd[1]: Starting modprobe@drm.service... Jul 15 11:06:27.406951 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:06:27.406962 systemd[1]: Starting modprobe@fuse.service... 
Jul 15 11:06:27.406972 systemd[1]: Starting modprobe@loop.service... Jul 15 11:06:27.406983 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:06:27.406993 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 11:06:27.407004 systemd[1]: Stopped systemd-fsck-root.service. Jul 15 11:06:27.407014 kernel: fuse: init (API version 7.34) Jul 15 11:06:27.407025 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 11:06:27.407036 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 11:06:27.407046 systemd[1]: Stopped systemd-journald.service. Jul 15 11:06:27.407056 kernel: loop: module loaded Jul 15 11:06:27.407066 systemd[1]: Starting systemd-journald.service... Jul 15 11:06:27.407076 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:06:27.407087 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:06:27.407097 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:06:27.407107 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:06:27.407118 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 11:06:27.407129 systemd[1]: Stopped verity-setup.service. Jul 15 11:06:27.407139 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:06:27.407149 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:06:27.407159 systemd[1]: Mounted media.mount. Jul 15 11:06:27.407169 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:06:27.407183 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:06:27.407197 systemd-journald[1002]: Journal started Jul 15 11:06:27.407244 systemd-journald[1002]: Runtime Journal (/run/log/journal/81bc0e12626749cfa3db8a5522ec692d) is 6.0M, max 48.7M, 42.6M free. 
Jul 15 11:06:25.441000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 11:06:25.538000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:06:25.538000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:06:25.538000 audit: BPF prog-id=10 op=LOAD Jul 15 11:06:25.538000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:06:25.538000 audit: BPF prog-id=11 op=LOAD Jul 15 11:06:25.538000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:06:25.583000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 15 11:06:25.583000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400011a5c4 a1=400011c708 a2=4000126980 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:06:25.583000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:06:25.584000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 15 11:06:25.584000 audit[931]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400011a699 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:06:25.584000 audit: CWD cwd="/" Jul 15 11:06:25.584000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:06:25.584000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:06:25.584000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:06:27.268000 audit: BPF prog-id=12 op=LOAD Jul 15 11:06:27.268000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:06:27.268000 audit: BPF prog-id=13 op=LOAD Jul 15 11:06:27.268000 audit: BPF prog-id=14 op=LOAD Jul 15 11:06:27.268000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:06:27.268000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:06:27.269000 audit: BPF prog-id=15 op=LOAD Jul 15 11:06:27.269000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:06:27.269000 audit: BPF prog-id=16 op=LOAD Jul 15 11:06:27.269000 audit: BPF prog-id=17 op=LOAD Jul 15 11:06:27.269000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:06:27.269000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:06:27.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:27.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.280000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:06:27.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:27.377000 audit: BPF prog-id=18 op=LOAD Jul 15 11:06:27.377000 audit: BPF prog-id=19 op=LOAD Jul 15 11:06:27.377000 audit: BPF prog-id=20 op=LOAD Jul 15 11:06:27.377000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:06:27.377000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:06:27.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.402000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:06:27.402000 audit[1002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1a080a0 a2=4000 a3=1 items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:06:27.402000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:06:25.581891 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:06:27.265491 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:06:25.582184 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:06:27.265502 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
Jul 15 11:06:25.582205 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:06:27.270854 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 11:06:25.582235 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 15 11:06:25.582245 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 15 11:06:25.582276 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 15 11:06:25.582288 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 15 11:06:25.582524 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 15 11:06:25.582566 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:06:25.582579 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:06:25.583335 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 15 11:06:25.583370 /usr/lib/systemd/system-generators/torcx-generator[931]: 
time="2025-07-15T11:06:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 15 11:06:25.583389 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 Jul 15 11:06:25.583405 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 15 11:06:25.583423 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 Jul 15 11:06:25.583439 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 15 11:06:27.018220 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:06:27.018476 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:06:27.409930 systemd[1]: Started systemd-journald.service. 
Jul 15 11:06:27.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.018577 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:06:27.018772 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:06:27.018820 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 15 11:06:27.018877 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-07-15T11:06:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 15 11:06:27.410513 systemd[1]: Mounted tmp.mount. Jul 15 11:06:27.411494 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:06:27.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.412597 systemd[1]: Finished kmod-static-nodes.service. 
Jul 15 11:06:27.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.413717 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:06:27.413876 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:06:27.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.414901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:06:27.415019 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:06:27.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.416044 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:06:27.416209 systemd[1]: Finished modprobe@drm.service. Jul 15 11:06:27.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:27.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.417295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:06:27.417452 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:06:27.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.418942 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:06:27.419096 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:06:27.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.420170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:06:27.420309 systemd[1]: Finished modprobe@loop.service. Jul 15 11:06:27.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:27.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.421636 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:06:27.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.422766 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:06:27.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.423924 systemd[1]: Finished systemd-remount-fs.service. Jul 15 11:06:27.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.425399 systemd[1]: Reached target network-pre.target. Jul 15 11:06:27.427316 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:06:27.429298 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:06:27.430177 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:06:27.431773 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:06:27.433884 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:06:27.435014 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:06:27.435968 systemd[1]: Starting systemd-random-seed.service... 
Jul 15 11:06:27.436840 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:06:27.438362 systemd-journald[1002]: Time spent on flushing to /var/log/journal/81bc0e12626749cfa3db8a5522ec692d is 12.708ms for 998 entries. Jul 15 11:06:27.438362 systemd-journald[1002]: System Journal (/var/log/journal/81bc0e12626749cfa3db8a5522ec692d) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:06:27.466793 systemd-journald[1002]: Received client request to flush runtime journal. Jul 15 11:06:27.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.438057 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:06:27.441171 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:06:27.445327 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:06:27.446478 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:06:27.450605 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:06:27.451551 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:06:27.458099 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:06:27.460814 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:06:27.461896 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:06:27.467739 systemd[1]: Finished systemd-journal-flush.service. 
Jul 15 11:06:27.469287 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 15 11:06:27.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.473943 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:06:27.475785 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:06:27.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.493105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:06:27.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.822112 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:06:27.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.823000 audit: BPF prog-id=21 op=LOAD Jul 15 11:06:27.823000 audit: BPF prog-id=22 op=LOAD Jul 15 11:06:27.823000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:06:27.823000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:06:27.824340 systemd[1]: Starting systemd-udevd.service... Jul 15 11:06:27.839875 systemd-udevd[1037]: Using default interface naming scheme 'v252'. Jul 15 11:06:27.853112 systemd[1]: Started systemd-udevd.service. 
Jul 15 11:06:27.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.855000 audit: BPF prog-id=23 op=LOAD Jul 15 11:06:27.856574 systemd[1]: Starting systemd-networkd.service... Jul 15 11:06:27.870279 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 15 11:06:27.875223 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:06:27.873000 audit: BPF prog-id=24 op=LOAD Jul 15 11:06:27.873000 audit: BPF prog-id=25 op=LOAD Jul 15 11:06:27.873000 audit: BPF prog-id=26 op=LOAD Jul 15 11:06:27.910073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:06:27.914503 systemd[1]: Started systemd-userdbd.service. Jul 15 11:06:27.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.949980 systemd[1]: Finished systemd-udev-settle.service. Jul 15 11:06:27.952005 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:06:27.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.962076 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:06:27.978798 systemd-networkd[1049]: lo: Link UP Jul 15 11:06:27.978807 systemd-networkd[1049]: lo: Gained carrier Jul 15 11:06:27.979170 systemd-networkd[1049]: Enumeration completed Jul 15 11:06:27.979277 systemd-networkd[1049]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:06:27.979282 systemd[1]: Started systemd-networkd.service. 
Jul 15 11:06:27.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.982196 systemd-networkd[1049]: eth0: Link UP Jul 15 11:06:27.982206 systemd-networkd[1049]: eth0: Gained carrier Jul 15 11:06:27.990474 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:06:27.991497 systemd[1]: Reached target cryptsetup.target. Jul 15 11:06:27.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:27.993484 systemd[1]: Starting lvm2-activation.service... Jul 15 11:06:27.997158 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:06:27.998900 systemd-networkd[1049]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:06:28.032465 systemd[1]: Finished lvm2-activation.service. Jul 15 11:06:28.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.033415 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:06:28.034327 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:06:28.034360 systemd[1]: Reached target local-fs.target. Jul 15 11:06:28.035239 systemd[1]: Reached target machines.target. Jul 15 11:06:28.037196 systemd[1]: Starting ldconfig.service... Jul 15 11:06:28.038417 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 15 11:06:28.038493 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:06:28.039621 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:06:28.041643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:06:28.043837 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:06:28.045999 systemd[1]: Starting systemd-sysext.service... Jul 15 11:06:28.047176 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) Jul 15 11:06:28.048287 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:06:28.054807 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:06:28.056401 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:06:28.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.068260 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:06:28.068448 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:06:28.117688 kernel: loop0: detected capacity change from 0 to 207008 Jul 15 11:06:28.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.124341 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:06:28.124954 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 15 11:06:28.129105 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Jul 15 11:06:28.129105 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Jul 15 11:06:28.132571 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:06:28.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.135308 systemd[1]: Mounting boot.mount... Jul 15 11:06:28.139280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:06:28.151005 systemd[1]: Mounted boot.mount. Jul 15 11:06:28.158042 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:06:28.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.168651 kernel: loop1: detected capacity change from 0 to 207008 Jul 15 11:06:28.176855 (sd-sysext)[1089]: Using extensions 'kubernetes'. Jul 15 11:06:28.179038 (sd-sysext)[1089]: Merged extensions into '/usr'. Jul 15 11:06:28.197435 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.198786 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:06:28.200798 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:06:28.202645 systemd[1]: Starting modprobe@loop.service... Jul 15 11:06:28.203542 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.203701 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 15 11:06:28.204448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:06:28.204576 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:06:28.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.205985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:06:28.206093 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:06:28.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.207465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:06:28.207570 systemd[1]: Finished modprobe@loop.service. Jul 15 11:06:28.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:28.208976 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:06:28.209072 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.285705 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:06:28.289150 systemd[1]: Finished ldconfig.service. Jul 15 11:06:28.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.401402 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:06:28.406158 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:06:28.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.407947 systemd[1]: Finished systemd-sysext.service. Jul 15 11:06:28.409853 systemd[1]: Starting ensure-sysext.service... Jul 15 11:06:28.411498 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:06:28.415800 systemd[1]: Reloading. Jul 15 11:06:28.420443 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:06:28.421915 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:06:28.423319 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 15 11:06:28.451870 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-07-15T11:06:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:06:28.451897 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-07-15T11:06:28Z" level=info msg="torcx already run" Jul 15 11:06:28.517604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:06:28.517624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:06:28.532858 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:06:28.573000 audit: BPF prog-id=27 op=LOAD Jul 15 11:06:28.573000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:06:28.574000 audit: BPF prog-id=28 op=LOAD Jul 15 11:06:28.574000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:06:28.574000 audit: BPF prog-id=29 op=LOAD Jul 15 11:06:28.574000 audit: BPF prog-id=30 op=LOAD Jul 15 11:06:28.574000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:06:28.574000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:06:28.574000 audit: BPF prog-id=31 op=LOAD Jul 15 11:06:28.574000 audit: BPF prog-id=32 op=LOAD Jul 15 11:06:28.574000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:06:28.574000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:06:28.576000 audit: BPF prog-id=33 op=LOAD Jul 15 11:06:28.576000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:06:28.576000 audit: BPF prog-id=34 op=LOAD Jul 15 11:06:28.576000 audit: BPF prog-id=35 op=LOAD Jul 15 11:06:28.576000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:06:28.576000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:06:28.578594 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:06:28.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.583005 systemd[1]: Starting audit-rules.service... Jul 15 11:06:28.584881 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:06:28.586763 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:06:28.588000 audit: BPF prog-id=36 op=LOAD Jul 15 11:06:28.590380 systemd[1]: Starting systemd-resolved.service... Jul 15 11:06:28.592000 audit: BPF prog-id=37 op=LOAD Jul 15 11:06:28.594078 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:06:28.595869 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:06:28.598738 systemd[1]: Finished clean-ca-certificates.service. 
Jul 15 11:06:28.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.599000 audit[1166]: SYSTEM_BOOT pid=1166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.602954 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.604172 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:06:28.606002 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:06:28.608012 systemd[1]: Starting modprobe@loop.service... Jul 15 11:06:28.608834 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.609020 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:06:28.609154 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:06:28.610299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:06:28.610414 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:06:28.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:06:28.611744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:06:28.611869 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:06:28.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.613312 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:06:28.613435 systemd[1]: Finished modprobe@loop.service. Jul 15 11:06:28.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.616159 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.617397 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:06:28.619431 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:06:28.621424 systemd[1]: Starting modprobe@loop.service... Jul 15 11:06:28.622234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.622361 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 15 11:06:28.622457 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:06:28.623284 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:06:28.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.624613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:06:28.624766 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:06:28.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.625908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:06:28.626018 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:06:28.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.627389 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:06:28.627497 systemd[1]: Finished modprobe@loop.service. 
Jul 15 11:06:28.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:06:28.629605 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:06:28.629861 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.632977 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.635091 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:06:28.637510 systemd[1]: Starting modprobe@drm.service... Jul 15 11:06:28.642394 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:06:28.644498 systemd[1]: Starting modprobe@loop.service... Jul 15 11:06:28.645364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:06:28.645508 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:06:28.646899 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:06:28.647879 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:06:28.649055 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:06:28.651124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:06:28.651232 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 15 11:06:28.652413 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 11:06:28.652518 systemd[1]: Finished modprobe@drm.service.
Jul 15 11:06:28.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.653749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 11:06:28.653858 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 15 11:06:28.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:06:28.656378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 11:06:28.656492 systemd[1]: Finished modprobe@loop.service.
Jul 15 11:06:28.658000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 15 11:06:28.658000 audit[1186]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff7c1b260 a2=420 a3=0 items=0 ppid=1155 pid=1186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:06:28.658000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 15 11:06:28.661443 augenrules[1186]: No rules
Jul 15 11:06:28.662438 systemd[1]: Started systemd-timesyncd.service.
Jul 15 11:06:28.208925 systemd-timesyncd[1165]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 11:06:28.232731 systemd-journald[1002]: Time jumped backwards, rotating.
Jul 15 11:06:28.208974 systemd-timesyncd[1165]: Initial clock synchronization to Tue 2025-07-15 11:06:28.208854 UTC.
Jul 15 11:06:28.210341 systemd-resolved[1159]: Positive Trust Anchors:
Jul 15 11:06:28.210348 systemd-resolved[1159]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:06:28.210374 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:06:28.210700 systemd[1]: Finished audit-rules.service.
Jul 15 11:06:28.213132 systemd[1]: Reached target time-set.target.
Jul 15 11:06:28.214036 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 11:06:28.214074 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 15 11:06:28.215128 systemd[1]: Starting systemd-update-done.service...
Jul 15 11:06:28.216334 systemd[1]: Finished ensure-sysext.service.
Jul 15 11:06:28.222404 systemd[1]: Finished systemd-update-done.service.
Jul 15 11:06:28.229338 systemd-resolved[1159]: Defaulting to hostname 'linux'.
Jul 15 11:06:28.230981 systemd[1]: Started systemd-resolved.service.
Jul 15 11:06:28.232017 systemd[1]: Reached target network.target.
Jul 15 11:06:28.232880 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:06:28.233645 systemd[1]: Reached target sysinit.target.
Jul 15 11:06:28.234422 systemd[1]: Started motdgen.path.
Jul 15 11:06:28.235133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 15 11:06:28.236306 systemd[1]: Started logrotate.timer.
Jul 15 11:06:28.237110 systemd[1]: Started mdadm.timer.
Jul 15 11:06:28.237781 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 15 11:06:28.238586 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 11:06:28.238618 systemd[1]: Reached target paths.target.
Jul 15 11:06:28.239302 systemd[1]: Reached target timers.target.
Jul 15 11:06:28.240375 systemd[1]: Listening on dbus.socket.
Jul 15 11:06:28.242084 systemd[1]: Starting docker.socket...
Jul 15 11:06:28.245042 systemd[1]: Listening on sshd.socket.
Jul 15 11:06:28.245897 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:06:28.246355 systemd[1]: Listening on docker.socket.
Jul 15 11:06:28.247198 systemd[1]: Reached target sockets.target.
Jul 15 11:06:28.247968 systemd[1]: Reached target basic.target.
Jul 15 11:06:28.248756 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 15 11:06:28.248787 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 15 11:06:28.249737 systemd[1]: Starting containerd.service...
Jul 15 11:06:28.251378 systemd[1]: Starting dbus.service...
Jul 15 11:06:28.253085 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 15 11:06:28.254957 systemd[1]: Starting extend-filesystems.service...
Jul 15 11:06:28.255875 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 15 11:06:28.257213 systemd[1]: Starting motdgen.service...
Jul 15 11:06:28.258870 systemd[1]: Starting prepare-helm.service...
Jul 15 11:06:28.260951 jq[1198]: false
Jul 15 11:06:28.261664 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 15 11:06:28.263466 systemd[1]: Starting sshd-keygen.service...
Jul 15 11:06:28.267480 systemd[1]: Starting systemd-logind.service...
Jul 15 11:06:28.268242 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 15 11:06:28.268325 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 11:06:28.269619 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 11:06:28.270232 systemd[1]: Starting update-engine.service...
Jul 15 11:06:28.271944 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 15 11:06:28.274252 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 11:06:28.274647 jq[1216]: true
Jul 15 11:06:28.274420 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 15 11:06:28.275410 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 11:06:28.275587 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 15 11:06:28.283088 jq[1220]: true
Jul 15 11:06:28.285618 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 11:06:28.285781 systemd[1]: Finished motdgen.service.
Jul 15 11:06:28.289994 extend-filesystems[1199]: Found loop1
Jul 15 11:06:28.289994 extend-filesystems[1199]: Found vda
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda1
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda2
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda3
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found usr
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda4
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda6
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda7
Jul 15 11:06:28.291660 extend-filesystems[1199]: Found vda9
Jul 15 11:06:28.291660 extend-filesystems[1199]: Checking size of /dev/vda9
Jul 15 11:06:28.301083 tar[1219]: linux-arm64/LICENSE
Jul 15 11:06:28.301083 tar[1219]: linux-arm64/helm
Jul 15 11:06:28.310811 extend-filesystems[1199]: Resized partition /dev/vda9
Jul 15 11:06:28.320313 extend-filesystems[1240]: resize2fs 1.46.5 (30-Dec-2021)
Jul 15 11:06:28.325184 dbus-daemon[1197]: [system] SELinux support is enabled
Jul 15 11:06:28.325333 systemd[1]: Started dbus.service.
Jul 15 11:06:28.328335 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 11:06:28.328355 systemd[1]: Reached target system-config.target.
Jul 15 11:06:28.329381 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 11:06:28.329401 systemd[1]: Reached target user-config.target.
Jul 15 11:06:28.338219 systemd-logind[1213]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 15 11:06:28.339187 systemd-logind[1213]: New seat seat0.
Jul 15 11:06:28.342559 bash[1248]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 11:06:28.343422 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 15 11:06:28.344861 systemd[1]: Started systemd-logind.service.
Jul 15 11:06:28.346625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 11:06:28.357869 update_engine[1215]: I0715 11:06:28.349380 1215 main.cc:92] Flatcar Update Engine starting
Jul 15 11:06:28.364994 systemd[1]: Started update-engine.service.
Jul 15 11:06:28.365261 update_engine[1215]: I0715 11:06:28.365242 1215 update_check_scheduler.cc:74] Next update check in 4m37s
Jul 15 11:06:28.367942 systemd[1]: Started locksmithd.service.
Jul 15 11:06:28.370536 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 11:06:28.381014 extend-filesystems[1240]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 11:06:28.381014 extend-filesystems[1240]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 11:06:28.381014 extend-filesystems[1240]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 11:06:28.384538 extend-filesystems[1199]: Resized filesystem in /dev/vda9
Jul 15 11:06:28.383780 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 11:06:28.383946 systemd[1]: Finished extend-filesystems.service.
Jul 15 11:06:28.402886 env[1221]: time="2025-07-15T11:06:28.402829725Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 15 11:06:28.421740 env[1221]: time="2025-07-15T11:06:28.421697685Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 15 11:06:28.422031 env[1221]: time="2025-07-15T11:06:28.422007765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425031765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425059125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425251845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425274645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425286765Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425296085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425363045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425645645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425777325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 15 11:06:28.425941 env[1221]: time="2025-07-15T11:06:28.425792845Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 15 11:06:28.426205 env[1221]: time="2025-07-15T11:06:28.425846245Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 15 11:06:28.426205 env[1221]: time="2025-07-15T11:06:28.425858525Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 11:06:28.430364 env[1221]: time="2025-07-15T11:06:28.430331005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 15 11:06:28.430419 env[1221]: time="2025-07-15T11:06:28.430365765Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 15 11:06:28.430419 env[1221]: time="2025-07-15T11:06:28.430379205Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 15 11:06:28.430474 env[1221]: time="2025-07-15T11:06:28.430418365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430474 env[1221]: time="2025-07-15T11:06:28.430434445Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430474 env[1221]: time="2025-07-15T11:06:28.430447885Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430474 env[1221]: time="2025-07-15T11:06:28.430459805Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430880 env[1221]: time="2025-07-15T11:06:28.430852885Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430880 env[1221]: time="2025-07-15T11:06:28.430879805Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430942 env[1221]: time="2025-07-15T11:06:28.430893165Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430942 env[1221]: time="2025-07-15T11:06:28.430905205Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.430942 env[1221]: time="2025-07-15T11:06:28.430927925Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 15 11:06:28.431077 env[1221]: time="2025-07-15T11:06:28.431049285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 15 11:06:28.431170 env[1221]: time="2025-07-15T11:06:28.431137765Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 15 11:06:28.431459 env[1221]: time="2025-07-15T11:06:28.431434605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 15 11:06:28.431578 env[1221]: time="2025-07-15T11:06:28.431559085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.431643 env[1221]: time="2025-07-15T11:06:28.431628005Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 15 11:06:28.431890 env[1221]: time="2025-07-15T11:06:28.431803205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.431975 env[1221]: time="2025-07-15T11:06:28.431959285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432049 env[1221]: time="2025-07-15T11:06:28.432034845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432196 env[1221]: time="2025-07-15T11:06:28.432180845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432267 env[1221]: time="2025-07-15T11:06:28.432253125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432327 env[1221]: time="2025-07-15T11:06:28.432314005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432382 env[1221]: time="2025-07-15T11:06:28.432369885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432437 env[1221]: time="2025-07-15T11:06:28.432423765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432504 env[1221]: time="2025-07-15T11:06:28.432490845Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 15 11:06:28.432723 env[1221]: time="2025-07-15T11:06:28.432701965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432795 env[1221]: time="2025-07-15T11:06:28.432781165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432850 env[1221]: time="2025-07-15T11:06:28.432836845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.432906 env[1221]: time="2025-07-15T11:06:28.432893085Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 15 11:06:28.432966 env[1221]: time="2025-07-15T11:06:28.432949365Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 15 11:06:28.433020 env[1221]: time="2025-07-15T11:06:28.433006725Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 15 11:06:28.433130 env[1221]: time="2025-07-15T11:06:28.433112405Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 15 11:06:28.433215 env[1221]: time="2025-07-15T11:06:28.433200005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 15 11:06:28.433489 env[1221]: time="2025-07-15T11:06:28.433439605Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 15 11:06:28.434094 env[1221]: time="2025-07-15T11:06:28.433848925Z" level=info msg="Connect containerd service"
Jul 15 11:06:28.434145 env[1221]: time="2025-07-15T11:06:28.433890325Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 15 11:06:28.435828 env[1221]: time="2025-07-15T11:06:28.435761165Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:06:28.436147 env[1221]: time="2025-07-15T11:06:28.436129845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 11:06:28.436188 env[1221]: time="2025-07-15T11:06:28.436171085Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 11:06:28.436231 env[1221]: time="2025-07-15T11:06:28.436220125Z" level=info msg="containerd successfully booted in 0.035884s"
Jul 15 11:06:28.436290 systemd[1]: Started containerd.service.
Jul 15 11:06:28.436365 env[1221]: time="2025-07-15T11:06:28.436345525Z" level=info msg="Start subscribing containerd event"
Jul 15 11:06:28.436408 env[1221]: time="2025-07-15T11:06:28.436383605Z" level=info msg="Start recovering state"
Jul 15 11:06:28.436456 env[1221]: time="2025-07-15T11:06:28.436443605Z" level=info msg="Start event monitor"
Jul 15 11:06:28.436484 env[1221]: time="2025-07-15T11:06:28.436462325Z" level=info msg="Start snapshots syncer"
Jul 15 11:06:28.436484 env[1221]: time="2025-07-15T11:06:28.436474765Z" level=info msg="Start cni network conf syncer for default"
Jul 15 11:06:28.436529 env[1221]: time="2025-07-15T11:06:28.436481885Z" level=info msg="Start streaming server"
Jul 15 11:06:28.447775 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 11:06:28.728085 tar[1219]: linux-arm64/README.md
Jul 15 11:06:28.732421 systemd[1]: Finished prepare-helm.service.
Jul 15 11:06:29.301695 systemd-networkd[1049]: eth0: Gained IPv6LL
Jul 15 11:06:29.303358 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 15 11:06:29.304618 systemd[1]: Reached target network-online.target.
Jul 15 11:06:29.306897 systemd[1]: Starting kubelet.service...
Jul 15 11:06:29.536619 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 11:06:29.554440 systemd[1]: Finished sshd-keygen.service.
Jul 15 11:06:29.556835 systemd[1]: Starting issuegen.service...
Jul 15 11:06:29.561561 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 11:06:29.561711 systemd[1]: Finished issuegen.service.
Jul 15 11:06:29.563857 systemd[1]: Starting systemd-user-sessions.service...
Jul 15 11:06:29.569633 systemd[1]: Finished systemd-user-sessions.service.
Jul 15 11:06:29.571855 systemd[1]: Started getty@tty1.service.
Jul 15 11:06:29.573908 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 15 11:06:29.574930 systemd[1]: Reached target getty.target.
Jul 15 11:06:29.895354 systemd[1]: Started kubelet.service.
Jul 15 11:06:29.896803 systemd[1]: Reached target multi-user.target.
Jul 15 11:06:29.899024 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 15 11:06:29.906098 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 15 11:06:29.906242 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 15 11:06:29.907405 systemd[1]: Startup finished in 571ms (kernel) + 4.836s (initrd) + 4.957s (userspace) = 10.364s.
Jul 15 11:06:30.384578 kubelet[1279]: E0715 11:06:30.384439 1279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:06:30.386248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:06:30.386374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:06:32.852018 systemd[1]: Created slice system-sshd.slice.
Jul 15 11:06:32.853106 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:57576.service.
Jul 15 11:06:32.899495 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 57576 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:06:32.903968 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:32.913075 systemd[1]: Created slice user-500.slice.
Jul 15 11:06:32.914149 systemd[1]: Starting user-runtime-dir@500.service...
Jul 15 11:06:32.916602 systemd-logind[1213]: New session 1 of user core.
Jul 15 11:06:32.922129 systemd[1]: Finished user-runtime-dir@500.service.
Jul 15 11:06:32.923329 systemd[1]: Starting user@500.service...
Jul 15 11:06:32.926208 (systemd)[1292]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:32.987204 systemd[1292]: Queued start job for default target default.target.
Jul 15 11:06:32.987696 systemd[1292]: Reached target paths.target.
Jul 15 11:06:32.987726 systemd[1292]: Reached target sockets.target.
Jul 15 11:06:32.987737 systemd[1292]: Reached target timers.target.
Jul 15 11:06:32.987747 systemd[1292]: Reached target basic.target.
Jul 15 11:06:32.987787 systemd[1292]: Reached target default.target.
Jul 15 11:06:32.987810 systemd[1292]: Startup finished in 56ms.
Jul 15 11:06:32.987872 systemd[1]: Started user@500.service.
Jul 15 11:06:32.988804 systemd[1]: Started session-1.scope.
Jul 15 11:06:33.041678 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:57592.service.
Jul 15 11:06:33.080236 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 57592 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:06:33.081431 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:33.085018 systemd-logind[1213]: New session 2 of user core.
Jul 15 11:06:33.086178 systemd[1]: Started session-2.scope.
Jul 15 11:06:33.141892 sshd[1301]: pam_unix(sshd:session): session closed for user core
Jul 15 11:06:33.145020 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:57592.service: Deactivated successfully.
Jul 15 11:06:33.145611 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 11:06:33.146063 systemd-logind[1213]: Session 2 logged out. Waiting for processes to exit.
Jul 15 11:06:33.146989 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:57598.service.
Jul 15 11:06:33.147676 systemd-logind[1213]: Removed session 2.
Jul 15 11:06:33.181947 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 57598 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:06:33.183170 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:33.186286 systemd-logind[1213]: New session 3 of user core.
Jul 15 11:06:33.187056 systemd[1]: Started session-3.scope.
Jul 15 11:06:33.236108 sshd[1307]: pam_unix(sshd:session): session closed for user core
Jul 15 11:06:33.239659 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:57602.service.
Jul 15 11:06:33.240309 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:57598.service: Deactivated successfully.
Jul 15 11:06:33.240918 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 11:06:33.241423 systemd-logind[1213]: Session 3 logged out. Waiting for processes to exit.
Jul 15 11:06:33.242173 systemd-logind[1213]: Removed session 3.
Jul 15 11:06:33.274971 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 57602 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:06:33.276103 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:33.279391 systemd-logind[1213]: New session 4 of user core.
Jul 15 11:06:33.280181 systemd[1]: Started session-4.scope.
Jul 15 11:06:33.334342 sshd[1312]: pam_unix(sshd:session): session closed for user core
Jul 15 11:06:33.337913 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:57604.service.
Jul 15 11:06:33.340072 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:57602.service: Deactivated successfully.
Jul 15 11:06:33.340760 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 11:06:33.341417 systemd-logind[1213]: Session 4 logged out. Waiting for processes to exit.
Jul 15 11:06:33.343420 systemd-logind[1213]: Removed session 4.
Jul 15 11:06:33.382053 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 57604 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:06:33.383248 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:06:33.388048 systemd-logind[1213]: New session 5 of user core.
Jul 15 11:06:33.388976 systemd[1]: Started session-5.scope.
Jul 15 11:06:33.485829 sudo[1323]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 11:06:33.486053 sudo[1323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 15 11:06:33.548287 systemd[1]: Starting docker.service...
Jul 15 11:06:33.641064 env[1335]: time="2025-07-15T11:06:33.641000245Z" level=info msg="Starting up"
Jul 15 11:06:33.642656 env[1335]: time="2025-07-15T11:06:33.642619205Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:06:33.642656 env[1335]: time="2025-07-15T11:06:33.642646645Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:06:33.642724 env[1335]: time="2025-07-15T11:06:33.642665845Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:06:33.642724 env[1335]: time="2025-07-15T11:06:33.642676205Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:06:33.645279 env[1335]: time="2025-07-15T11:06:33.645253725Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:06:33.645383 env[1335]: time="2025-07-15T11:06:33.645368925Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:06:33.645446 env[1335]: time="2025-07-15T11:06:33.645430445Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:06:33.645506 env[1335]: time="2025-07-15T11:06:33.645493685Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:06:33.768272 env[1335]: time="2025-07-15T11:06:33.768179045Z" level=info msg="Loading containers: start."
Jul 15 11:06:33.890556 kernel: Initializing XFRM netlink socket
Jul 15 11:06:33.912823 env[1335]: time="2025-07-15T11:06:33.912777845Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 15 11:06:33.972133 systemd-networkd[1049]: docker0: Link UP
Jul 15 11:06:33.992864 env[1335]: time="2025-07-15T11:06:33.992817285Z" level=info msg="Loading containers: done."
Jul 15 11:06:34.008396 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1752308694-merged.mount: Deactivated successfully.
Jul 15 11:06:34.014862 env[1335]: time="2025-07-15T11:06:34.014330125Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 11:06:34.014862 env[1335]: time="2025-07-15T11:06:34.014492965Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 15 11:06:34.014862 env[1335]: time="2025-07-15T11:06:34.014610125Z" level=info msg="Daemon has completed initialization"
Jul 15 11:06:34.040750 systemd[1]: Started docker.service.
Jul 15 11:06:34.047758 env[1335]: time="2025-07-15T11:06:34.047710285Z" level=info msg="API listen on /run/docker.sock"
Jul 15 11:06:34.698007 env[1221]: time="2025-07-15T11:06:34.697947205Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 15 11:06:35.284791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100136150.mount: Deactivated successfully.
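The daemon above reports that docker0 was assigned the default pool 172.17.0.0/16 and that `--bip` can override it. A small sketch of the containment check this implies, using only the subnet stated in the log (the helper name is an illustration, not a Docker API):

```python
import ipaddress

# Default docker0 pool reported by the daemon above; --bip would override it.
DEFAULT_BRIDGE = ipaddress.ip_network("172.17.0.0/16")

def in_default_pool(ip: str) -> bool:
    """True if the address falls inside the default docker0 subnet."""
    return ipaddress.ip_address(ip) in DEFAULT_BRIDGE

print(in_default_pool("172.17.0.2"))  # → True
print(in_default_pool("10.0.0.43"))   # → False
```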
Jul 15 11:06:36.465854 env[1221]: time="2025-07-15T11:06:36.465790565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:36.467634 env[1221]: time="2025-07-15T11:06:36.467604365Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:36.469235 env[1221]: time="2025-07-15T11:06:36.469209805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:36.470874 env[1221]: time="2025-07-15T11:06:36.470846325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:36.472469 env[1221]: time="2025-07-15T11:06:36.472430885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 15 11:06:36.473019 env[1221]: time="2025-07-15T11:06:36.472997445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 15 11:06:37.748089 env[1221]: time="2025-07-15T11:06:37.748032525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:37.749536 env[1221]: time="2025-07-15T11:06:37.749483285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:37.751457 env[1221]: time="2025-07-15T11:06:37.751431085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:37.753150 env[1221]: time="2025-07-15T11:06:37.753123805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:37.754758 env[1221]: time="2025-07-15T11:06:37.754726965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 15 11:06:37.755353 env[1221]: time="2025-07-15T11:06:37.755330205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 15 11:06:39.036492 env[1221]: time="2025-07-15T11:06:39.036441325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:39.038196 env[1221]: time="2025-07-15T11:06:39.038153125Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:39.040178 env[1221]: time="2025-07-15T11:06:39.040141685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:39.042541 env[1221]: time="2025-07-15T11:06:39.042490125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:39.043301 env[1221]: time="2025-07-15T11:06:39.043266285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 15 11:06:39.044083 env[1221]: time="2025-07-15T11:06:39.044057405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 15 11:06:39.974878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581504188.mount: Deactivated successfully.
Jul 15 11:06:40.449255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:06:40.449420 systemd[1]: Stopped kubelet.service.
Jul 15 11:06:40.450806 systemd[1]: Starting kubelet.service...
Jul 15 11:06:40.466322 env[1221]: time="2025-07-15T11:06:40.466269445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:40.473849 env[1221]: time="2025-07-15T11:06:40.473805725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:40.475260 env[1221]: time="2025-07-15T11:06:40.475212645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:40.476664 env[1221]: time="2025-07-15T11:06:40.476634565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:40.477087 env[1221]: time="2025-07-15T11:06:40.477044205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 15 11:06:40.477539 env[1221]: time="2025-07-15T11:06:40.477491485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 11:06:40.552012 systemd[1]: Started kubelet.service.
Jul 15 11:06:40.583672 kubelet[1468]: E0715 11:06:40.583614 1468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:06:40.586202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:06:40.586338 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:06:41.164384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034241797.mount: Deactivated successfully.
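The kubelet failure above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. A minimal sketch of pulling that path out of the run.go error line for triage, assuming only the message shape shown above (the pattern is not from any kubelet API):

```python
import re

# Abbreviated copy of the run.go:72 error entry from the log above.
err_line = (
    'kubelet[1468]: E0715 11:06:40.583614 1468 run.go:72] "command failed" '
    'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
    'error: open /var/lib/kubelet/config.yaml: no such file or directory"'
)

# Pull the config path out of the failure message (lazy match up to the
# first comma after "path: "; pattern is an assumption from this one line).
m = re.search(r"path: (\S+?),", err_line)
print(m.group(1))  # → /var/lib/kubelet/config.yaml
```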
Jul 15 11:06:42.086414 env[1221]: time="2025-07-15T11:06:42.086363885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.088010 env[1221]: time="2025-07-15T11:06:42.087980885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.090063 env[1221]: time="2025-07-15T11:06:42.090037005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.092321 env[1221]: time="2025-07-15T11:06:42.092294965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.093003 env[1221]: time="2025-07-15T11:06:42.092971285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 11:06:42.093647 env[1221]: time="2025-07-15T11:06:42.093608605Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 11:06:42.607476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531101592.mount: Deactivated successfully.
Jul 15 11:06:42.611648 env[1221]: time="2025-07-15T11:06:42.611584405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.613019 env[1221]: time="2025-07-15T11:06:42.612986205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.614574 env[1221]: time="2025-07-15T11:06:42.614550885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.616033 env[1221]: time="2025-07-15T11:06:42.615997605Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:42.616569 env[1221]: time="2025-07-15T11:06:42.616545325Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 11:06:42.617078 env[1221]: time="2025-07-15T11:06:42.617026325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 15 11:06:43.088726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981875640.mount: Deactivated successfully.
Jul 15 11:06:45.019560 env[1221]: time="2025-07-15T11:06:45.019489805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:45.020979 env[1221]: time="2025-07-15T11:06:45.020950285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:45.022823 env[1221]: time="2025-07-15T11:06:45.022795565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:45.024751 env[1221]: time="2025-07-15T11:06:45.024720925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:06:45.025678 env[1221]: time="2025-07-15T11:06:45.025650925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 15 11:06:50.113427 systemd[1]: Stopped kubelet.service.
Jul 15 11:06:50.115329 systemd[1]: Starting kubelet.service...
Jul 15 11:06:50.137987 systemd[1]: Reloading.
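Each PullImage request and its "returns image reference" entry carry nanosecond `time=` stamps, so pull durations can be computed directly from the log. A sketch using the etcd pull timestamps shown above (the parsing helper is an illustration; it truncates nanoseconds to microseconds for `strptime`):

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    """Parse containerd's time="...Z" stamps, truncating ns to µs."""
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

start = parse_ts("2025-07-15T11:06:42.617026325Z")  # PullImage etcd:3.5.16-0 issued
end = parse_ts("2025-07-15T11:06:45.025650925Z")    # returns image reference
print(round((end - start).total_seconds(), 1))  # → 2.4
```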
Jul 15 11:06:50.187282 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-07-15T11:06:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:06:50.187682 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-07-15T11:06:50Z" level=info msg="torcx already run"
Jul 15 11:06:50.269874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:06:50.270079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:06:50.285424 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:06:50.349894 systemd[1]: Started kubelet.service.
Jul 15 11:06:50.351332 systemd[1]: Stopping kubelet.service...
Jul 15 11:06:50.351854 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 11:06:50.352120 systemd[1]: Stopped kubelet.service.
Jul 15 11:06:50.353598 systemd[1]: Starting kubelet.service...
Jul 15 11:06:50.448740 systemd[1]: Started kubelet.service.
Jul 15 11:06:50.490488 kubelet[1569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:06:50.490488 kubelet[1569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 11:06:50.490488 kubelet[1569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:06:50.490819 kubelet[1569]: I0715 11:06:50.490556 1569 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 11:06:51.319439 kubelet[1569]: I0715 11:06:51.319388 1569 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 15 11:06:51.319439 kubelet[1569]: I0715 11:06:51.319427 1569 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 11:06:51.319766 kubelet[1569]: I0715 11:06:51.319736 1569 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 15 11:06:51.365153 kubelet[1569]: E0715 11:06:51.365082 1569 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:06:51.365896 kubelet[1569]: I0715 11:06:51.365864 1569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 11:06:51.373052 kubelet[1569]: E0715 11:06:51.373016 1569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 11:06:51.373167 kubelet[1569]: I0715 11:06:51.373152 1569 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 11:06:51.376228 kubelet[1569]: I0715 11:06:51.376204 1569 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 11:06:51.377669 kubelet[1569]: I0715 11:06:51.377630 1569 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 11:06:51.377933 kubelet[1569]: I0715 11:06:51.377759 1569 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 11:06:51.378169 kubelet[1569]: I0715 11:06:51.378155 1569 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 11:06:51.378227 kubelet[1569]: I0715 11:06:51.378219 1569 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 11:06:51.378542 kubelet[1569]: I0715 11:06:51.378512 1569 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:06:51.383926 kubelet[1569]: I0715 11:06:51.383905 1569 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 11:06:51.384071 kubelet[1569]: I0715 11:06:51.384026 1569 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 11:06:51.384172 kubelet[1569]: I0715 11:06:51.384159 1569 kubelet.go:352] "Adding apiserver pod source"
Jul 15 11:06:51.384235 kubelet[1569]: I0715 11:06:51.384226 1569 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 11:06:51.401040 kubelet[1569]: I0715 11:06:51.401018 1569 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 15 11:06:51.401321 kubelet[1569]: W0715 11:06:51.401283 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 15 11:06:51.401375 kubelet[1569]: E0715 11:06:51.401333 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:06:51.402970 kubelet[1569]: I0715 11:06:51.402943 1569 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 11:06:51.403093 kubelet[1569]: W0715 11:06:51.403089 1569 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 11:06:51.403357 kubelet[1569]: W0715 11:06:51.403316 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 15 11:06:51.403404 kubelet[1569]: E0715 11:06:51.403369 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:06:51.404191 kubelet[1569]: I0715 11:06:51.404175 1569 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 11:06:51.404238 kubelet[1569]: I0715 11:06:51.404208 1569 server.go:1287] "Started kubelet"
Jul 15 11:06:51.414980 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 15 11:06:51.415140 kubelet[1569]: I0715 11:06:51.415104 1569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 11:06:51.417931 kubelet[1569]: I0715 11:06:51.417860 1569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 11:06:51.418223 kubelet[1569]: I0715 11:06:51.418202 1569 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 11:06:51.418369 kubelet[1569]: I0715 11:06:51.418348 1569 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 11:06:51.419300 kubelet[1569]: I0715 11:06:51.419277 1569 server.go:479] "Adding debug handlers to kubelet server"
Jul 15 11:06:51.420219 kubelet[1569]: I0715 11:06:51.420194 1569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 11:06:51.424542 kubelet[1569]: E0715 11:06:51.424501 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:06:51.424677 kubelet[1569]: I0715 11:06:51.424664 1569 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 11:06:51.424937 kubelet[1569]: I0715 11:06:51.424916 1569 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 11:06:51.425073 kubelet[1569]: I0715 11:06:51.425060 1569 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 11:06:51.425489 kubelet[1569]: W0715 11:06:51.425435 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 15 11:06:51.425648 kubelet[1569]: E0715 11:06:51.425626 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:06:51.426061 kubelet[1569]: E0715 11:06:51.425812 1569 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852680d510ca47d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:06:51.404190845 +0000 UTC m=+0.952004841,LastTimestamp:2025-07-15 11:06:51.404190845 +0000 UTC m=+0.952004841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 11:06:51.427010 kubelet[1569]: I0715 11:06:51.426989 1569 factory.go:221] Registration of the containerd container factory successfully
Jul 15 11:06:51.427113 kubelet[1569]: I0715 11:06:51.427100 1569 factory.go:221] Registration of the systemd container factory successfully
Jul 15 11:06:51.427248 kubelet[1569]: I0715 11:06:51.427222 1569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 11:06:51.427894 kubelet[1569]: E0715 11:06:51.427853 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms"
Jul 15 11:06:51.430051 kubelet[1569]: E0715 11:06:51.427134 1569 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 11:06:51.441031 kubelet[1569]: I0715 11:06:51.440993 1569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 11:06:51.441361 kubelet[1569]: I0715 11:06:51.441335 1569 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 11:06:51.441361 kubelet[1569]: I0715 11:06:51.441360 1569 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 11:06:51.441438 kubelet[1569]: I0715 11:06:51.441389 1569 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:06:51.442435 kubelet[1569]: I0715 11:06:51.442404 1569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 11:06:51.442483 kubelet[1569]: I0715 11:06:51.442438 1569 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 15 11:06:51.442483 kubelet[1569]: I0715 11:06:51.442457 1569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 11:06:51.442483 kubelet[1569]: I0715 11:06:51.442465 1569 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 11:06:51.442589 kubelet[1569]: E0715 11:06:51.442533 1569 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 11:06:51.442982 kubelet[1569]: W0715 11:06:51.442946 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Jul 15 11:06:51.443049 kubelet[1569]: E0715 11:06:51.442997 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:06:51.525139 kubelet[1569]: E0715 11:06:51.525100 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:06:51.529372 kubelet[1569]: I0715 11:06:51.529350 1569 policy_none.go:49] "None policy: Start"
Jul 15 11:06:51.529372 kubelet[1569]: I0715 11:06:51.529376 1569 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 11:06:51.529499 kubelet[1569]: I0715 11:06:51.529389 1569 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 11:06:51.534502 systemd[1]: Created slice kubepods.slice.
Jul 15 11:06:51.538745 systemd[1]: Created slice kubepods-burstable.slice.
Jul 15 11:06:51.541213 systemd[1]: Created slice kubepods-besteffort.slice.
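The slice names above (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice, and later per-pod slices such as kubepods-burstable-pod4b89….slice) follow a predictable pattern. A sketch of that naming, assuming only the names visible in this log (the guaranteed-QoS case is an assumption, since no guaranteed pod appears here):

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build a per-pod slice name following the pattern seen in the log.
    Assumption: guaranteed pods sit directly under kubepods.slice,
    so their slice name omits the QoS segment."""
    if qos == "guaranteed":
        return f"kubepods-pod{pod_uid}.slice"
    return f"kubepods-{qos}-pod{pod_uid}.slice"

print(pod_slice("burstable", "4b890336b4d95055aa6c46f425648bc4"))
# → kubepods-burstable-pod4b890336b4d95055aa6c46f425648bc4.slice
```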
Jul 15 11:06:51.543527 kubelet[1569]: E0715 11:06:51.543496 1569 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 11:06:51.552271 kubelet[1569]: I0715 11:06:51.552234 1569 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 11:06:51.552467 kubelet[1569]: I0715 11:06:51.552414 1569 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 11:06:51.552467 kubelet[1569]: I0715 11:06:51.552434 1569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 11:06:51.552839 kubelet[1569]: I0715 11:06:51.552819 1569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 11:06:51.554094 kubelet[1569]: E0715 11:06:51.554053 1569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 11:06:51.554094 kubelet[1569]: E0715 11:06:51.554095 1569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 15 11:06:51.628596 kubelet[1569]: E0715 11:06:51.628485 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms"
Jul 15 11:06:51.653938 kubelet[1569]: I0715 11:06:51.653908 1569 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:06:51.654361 kubelet[1569]: E0715 11:06:51.654333 1569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 15 11:06:51.751216 systemd[1]: Created slice kubepods-burstable-pod4b890336b4d95055aa6c46f425648bc4.slice.
Jul 15 11:06:51.770237 kubelet[1569]: E0715 11:06:51.770191 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:06:51.771672 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 15 11:06:51.787651 kubelet[1569]: E0715 11:06:51.787624 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:06:51.790061 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 15 11:06:51.791549 kubelet[1569]: E0715 11:06:51.791528 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 11:06:51.826809 kubelet[1569]: I0715 11:06:51.826773 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:06:51.826996 kubelet[1569]: I0715 11:06:51.826979 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:06:51.827125 kubelet[1569]: I0715 11:06:51.827107 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 11:06:51.827228 kubelet[1569]: I0715 11:06:51.827214 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:06:51.827327 kubelet[1569]: I0715 11:06:51.827313 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:06:51.827434 kubelet[1569]: I0715 11:06:51.827420 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:06:51.827559 kubelet[1569]: I0715 11:06:51.827544 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:06:51.827676 kubelet[1569]: I0715 11:06:51.827660 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 11:06:51.827767 kubelet[1569]: I0715 11:06:51.827754 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 11:06:51.855811 kubelet[1569]: I0715 11:06:51.855786 1569 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 11:06:51.856159 kubelet[1569]: E0715 11:06:51.856132 1569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Jul 15 11:06:52.029755 kubelet[1569]: E0715 11:06:52.029709 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms"
Jul 15 11:06:52.071117 kubelet[1569]: E0715 11:06:52.071083 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:06:52.071731 env[1221]: time="2025-07-15T11:06:52.071690645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b890336b4d95055aa6c46f425648bc4,Namespace:kube-system,Attempt:0,}"
Jul 15 11:06:52.088965 kubelet[1569]: E0715 11:06:52.088939 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1
1.0.0.1 8.8.8.8" Jul 15 11:06:52.089495 env[1221]: time="2025-07-15T11:06:52.089438085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 15 11:06:52.092881 kubelet[1569]: E0715 11:06:52.092848 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:52.093203 env[1221]: time="2025-07-15T11:06:52.093170965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 15 11:06:52.258156 kubelet[1569]: I0715 11:06:52.258128 1569 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:06:52.258636 kubelet[1569]: E0715 11:06:52.258608 1569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jul 15 11:06:52.396156 kubelet[1569]: W0715 11:06:52.396072 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jul 15 11:06:52.396156 kubelet[1569]: E0715 11:06:52.396115 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:06:52.469847 kubelet[1569]: W0715 11:06:52.469785 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jul 15 11:06:52.469989 kubelet[1569]: E0715 11:06:52.469849 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:06:52.483508 kubelet[1569]: W0715 11:06:52.483464 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jul 15 11:06:52.483573 kubelet[1569]: E0715 11:06:52.483545 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:06:52.588195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252345064.mount: Deactivated successfully. 
Jul 15 11:06:52.595062 env[1221]: time="2025-07-15T11:06:52.595018205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.600251 env[1221]: time="2025-07-15T11:06:52.600188605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.601158 env[1221]: time="2025-07-15T11:06:52.601131405Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.603577 env[1221]: time="2025-07-15T11:06:52.603547925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.604729 env[1221]: time="2025-07-15T11:06:52.604693925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.606252 env[1221]: time="2025-07-15T11:06:52.606225085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.607694 env[1221]: time="2025-07-15T11:06:52.607667205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.609925 env[1221]: time="2025-07-15T11:06:52.609893045Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.611444 env[1221]: time="2025-07-15T11:06:52.611403645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.613325 env[1221]: time="2025-07-15T11:06:52.613280445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.614748 env[1221]: time="2025-07-15T11:06:52.614716125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.615431 env[1221]: time="2025-07-15T11:06:52.615397565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:06:52.658001 kubelet[1569]: W0715 11:06:52.657894 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jul 15 11:06:52.658001 kubelet[1569]: E0715 11:06:52.657956 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661786045Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661816085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661826125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661998605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d03f8b4285ddd5ebf938e8d77c3d75271e02c2b24206512b76500aee3423d0e pid=1628 runtime=io.containerd.runc.v2 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661542205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661583205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661593965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:06:52.662266 env[1221]: time="2025-07-15T11:06:52.661844845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be525cf45b4ea6584611332ecd86cf6d3f19841df257cd4531a8596bf53fc68 pid=1623 runtime=io.containerd.runc.v2 Jul 15 11:06:52.662615 env[1221]: time="2025-07-15T11:06:52.662466645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:06:52.662615 env[1221]: time="2025-07-15T11:06:52.662505685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:06:52.662615 env[1221]: time="2025-07-15T11:06:52.662524445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:06:52.662781 env[1221]: time="2025-07-15T11:06:52.662687165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1673b291d78aaff046e5612f9bbc5b7adce79adbc256b768e60eb41473e8735d pid=1626 runtime=io.containerd.runc.v2 Jul 15 11:06:52.674036 systemd[1]: Started cri-containerd-1673b291d78aaff046e5612f9bbc5b7adce79adbc256b768e60eb41473e8735d.scope. Jul 15 11:06:52.679907 systemd[1]: Started cri-containerd-7d03f8b4285ddd5ebf938e8d77c3d75271e02c2b24206512b76500aee3423d0e.scope. Jul 15 11:06:52.682029 systemd[1]: Started cri-containerd-6be525cf45b4ea6584611332ecd86cf6d3f19841df257cd4531a8596bf53fc68.scope. 
Jul 15 11:06:52.743700 env[1221]: time="2025-07-15T11:06:52.743639205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1673b291d78aaff046e5612f9bbc5b7adce79adbc256b768e60eb41473e8735d\"" Jul 15 11:06:52.744968 kubelet[1569]: E0715 11:06:52.744747 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:52.748488 env[1221]: time="2025-07-15T11:06:52.748423325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b890336b4d95055aa6c46f425648bc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d03f8b4285ddd5ebf938e8d77c3d75271e02c2b24206512b76500aee3423d0e\"" Jul 15 11:06:52.748850 env[1221]: time="2025-07-15T11:06:52.748806845Z" level=info msg="CreateContainer within sandbox \"1673b291d78aaff046e5612f9bbc5b7adce79adbc256b768e60eb41473e8735d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:06:52.749434 kubelet[1569]: E0715 11:06:52.749281 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:52.752232 env[1221]: time="2025-07-15T11:06:52.752200725Z" level=info msg="CreateContainer within sandbox \"7d03f8b4285ddd5ebf938e8d77c3d75271e02c2b24206512b76500aee3423d0e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:06:52.760114 env[1221]: time="2025-07-15T11:06:52.760079285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6be525cf45b4ea6584611332ecd86cf6d3f19841df257cd4531a8596bf53fc68\"" Jul 15 11:06:52.760807 kubelet[1569]: E0715 11:06:52.760780 
1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:52.762664 env[1221]: time="2025-07-15T11:06:52.762625285Z" level=info msg="CreateContainer within sandbox \"6be525cf45b4ea6584611332ecd86cf6d3f19841df257cd4531a8596bf53fc68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:06:52.765794 env[1221]: time="2025-07-15T11:06:52.765758485Z" level=info msg="CreateContainer within sandbox \"7d03f8b4285ddd5ebf938e8d77c3d75271e02c2b24206512b76500aee3423d0e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d96ca937201562460be788918617a8ad77e0b15994b42cfb372c2f214625e69f\"" Jul 15 11:06:52.766768 env[1221]: time="2025-07-15T11:06:52.766735605Z" level=info msg="StartContainer for \"d96ca937201562460be788918617a8ad77e0b15994b42cfb372c2f214625e69f\"" Jul 15 11:06:52.769151 env[1221]: time="2025-07-15T11:06:52.769094485Z" level=info msg="CreateContainer within sandbox \"1673b291d78aaff046e5612f9bbc5b7adce79adbc256b768e60eb41473e8735d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8bc672f631a31c856b7b32828ed1521df29549fc92dd939865c7a1d42680445f\"" Jul 15 11:06:52.769783 env[1221]: time="2025-07-15T11:06:52.769747125Z" level=info msg="StartContainer for \"8bc672f631a31c856b7b32828ed1521df29549fc92dd939865c7a1d42680445f\"" Jul 15 11:06:52.776071 env[1221]: time="2025-07-15T11:06:52.776014125Z" level=info msg="CreateContainer within sandbox \"6be525cf45b4ea6584611332ecd86cf6d3f19841df257cd4531a8596bf53fc68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"206799c2932562671b3bd9d704f1fe04adf9e6b9870c45a70fbf3af69c376186\"" Jul 15 11:06:52.776595 env[1221]: time="2025-07-15T11:06:52.776561005Z" level=info msg="StartContainer for \"206799c2932562671b3bd9d704f1fe04adf9e6b9870c45a70fbf3af69c376186\"" Jul 15 
11:06:52.785796 systemd[1]: Started cri-containerd-8bc672f631a31c856b7b32828ed1521df29549fc92dd939865c7a1d42680445f.scope. Jul 15 11:06:52.796368 systemd[1]: Started cri-containerd-d96ca937201562460be788918617a8ad77e0b15994b42cfb372c2f214625e69f.scope. Jul 15 11:06:52.814825 systemd[1]: Started cri-containerd-206799c2932562671b3bd9d704f1fe04adf9e6b9870c45a70fbf3af69c376186.scope. Jul 15 11:06:52.830550 kubelet[1569]: E0715 11:06:52.830078 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Jul 15 11:06:52.848163 env[1221]: time="2025-07-15T11:06:52.848119365Z" level=info msg="StartContainer for \"8bc672f631a31c856b7b32828ed1521df29549fc92dd939865c7a1d42680445f\" returns successfully" Jul 15 11:06:52.866410 env[1221]: time="2025-07-15T11:06:52.866367165Z" level=info msg="StartContainer for \"206799c2932562671b3bd9d704f1fe04adf9e6b9870c45a70fbf3af69c376186\" returns successfully" Jul 15 11:06:52.895248 env[1221]: time="2025-07-15T11:06:52.891660525Z" level=info msg="StartContainer for \"d96ca937201562460be788918617a8ad77e0b15994b42cfb372c2f214625e69f\" returns successfully" Jul 15 11:06:53.060825 kubelet[1569]: I0715 11:06:53.060773 1569 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:06:53.449430 kubelet[1569]: E0715 11:06:53.449399 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:53.449554 kubelet[1569]: E0715 11:06:53.449541 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:53.451497 kubelet[1569]: E0715 11:06:53.451453 1569 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:53.451614 kubelet[1569]: E0715 11:06:53.451594 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:53.452866 kubelet[1569]: E0715 11:06:53.452844 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:53.453090 kubelet[1569]: E0715 11:06:53.453076 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:54.454585 kubelet[1569]: E0715 11:06:54.454548 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:54.454889 kubelet[1569]: E0715 11:06:54.454670 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:54.454918 kubelet[1569]: E0715 11:06:54.454904 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:54.455013 kubelet[1569]: E0715 11:06:54.454986 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:54.455114 kubelet[1569]: E0715 11:06:54.455097 1569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:06:54.455288 kubelet[1569]: E0715 11:06:54.455271 1569 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:54.737091 kubelet[1569]: E0715 11:06:54.736991 1569 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:06:54.819750 kubelet[1569]: I0715 11:06:54.819711 1569 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:06:54.819750 kubelet[1569]: E0715 11:06:54.819750 1569 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 11:06:54.828579 kubelet[1569]: E0715 11:06:54.828548 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:54.929033 kubelet[1569]: E0715 11:06:54.929000 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:55.030254 kubelet[1569]: E0715 11:06:55.030144 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:55.130366 kubelet[1569]: E0715 11:06:55.130306 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:55.231018 kubelet[1569]: E0715 11:06:55.230971 1569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:55.327086 kubelet[1569]: I0715 11:06:55.326978 1569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:55.335098 kubelet[1569]: E0715 11:06:55.335064 1569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:55.335098 kubelet[1569]: I0715 11:06:55.335096 1569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:55.336829 kubelet[1569]: E0715 11:06:55.336803 1569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:55.336909 kubelet[1569]: I0715 11:06:55.336898 1569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:55.338651 kubelet[1569]: E0715 11:06:55.338625 1569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:55.387103 kubelet[1569]: I0715 11:06:55.387055 1569 apiserver.go:52] "Watching apiserver" Jul 15 11:06:55.425916 kubelet[1569]: I0715 11:06:55.425871 1569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:06:56.914742 systemd[1]: Reloading. Jul 15 11:06:56.963029 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-07-15T11:06:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:06:56.963061 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-07-15T11:06:56Z" level=info msg="torcx already run" Jul 15 11:06:57.019705 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 15 11:06:57.019726 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:06:57.035162 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:06:57.117338 systemd[1]: Stopping kubelet.service... Jul 15 11:06:57.139269 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:06:57.139489 systemd[1]: Stopped kubelet.service. Jul 15 11:06:57.139563 systemd[1]: kubelet.service: Consumed 1.350s CPU time. Jul 15 11:06:57.141697 systemd[1]: Starting kubelet.service... Jul 15 11:06:57.235733 systemd[1]: Started kubelet.service. Jul 15 11:06:57.270148 kubelet[1907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:06:57.270464 kubelet[1907]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 11:06:57.270514 kubelet[1907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:06:57.270739 kubelet[1907]: I0715 11:06:57.270708 1907 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:06:57.277512 kubelet[1907]: I0715 11:06:57.277477 1907 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 11:06:57.277512 kubelet[1907]: I0715 11:06:57.277505 1907 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:06:57.277790 kubelet[1907]: I0715 11:06:57.277760 1907 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 11:06:57.278991 kubelet[1907]: I0715 11:06:57.278970 1907 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 11:06:57.281194 kubelet[1907]: I0715 11:06:57.281169 1907 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:06:57.284484 kubelet[1907]: E0715 11:06:57.284436 1907 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:06:57.284484 kubelet[1907]: I0715 11:06:57.284482 1907 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:06:57.287322 kubelet[1907]: I0715 11:06:57.287295 1907 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:06:57.287634 kubelet[1907]: I0715 11:06:57.287607 1907 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:06:57.287925 kubelet[1907]: I0715 11:06:57.287694 1907 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 11:06:57.288064 kubelet[1907]: I0715 11:06:57.288049 1907 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 15 11:06:57.288127 kubelet[1907]: I0715 11:06:57.288118 1907 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 11:06:57.288227 kubelet[1907]: I0715 11:06:57.288215 1907 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:06:57.288404 kubelet[1907]: I0715 11:06:57.288389 1907 kubelet.go:446] "Attempting to sync node with API server" Jul 15 11:06:57.288496 kubelet[1907]: I0715 11:06:57.288483 1907 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:06:57.288589 kubelet[1907]: I0715 11:06:57.288579 1907 kubelet.go:352] "Adding apiserver pod source" Jul 15 11:06:57.288648 kubelet[1907]: I0715 11:06:57.288637 1907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:06:57.289786 kubelet[1907]: I0715 11:06:57.289756 1907 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.290239 1907 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.290876 1907 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.290905 1907 server.go:1287] "Started kubelet" Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.291328 1907 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.291695 1907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.291922 1907 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:06:57.293539 kubelet[1907]: I0715 11:06:57.293440 1907 server.go:479] "Adding debug handlers to kubelet server" Jul 15 11:06:57.293972 kubelet[1907]: I0715 11:06:57.293780 1907 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:06:57.295653 kubelet[1907]: I0715 11:06:57.295148 1907 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:06:57.297358 kubelet[1907]: I0715 11:06:57.297328 1907 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 11:06:57.297532 kubelet[1907]: E0715 11:06:57.297492 1907 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:06:57.298479 kubelet[1907]: I0715 11:06:57.297798 1907 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 11:06:57.298576 kubelet[1907]: I0715 11:06:57.297907 1907 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:06:57.299850 kubelet[1907]: E0715 11:06:57.299830 1907 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:06:57.299991 kubelet[1907]: I0715 11:06:57.299932 1907 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:06:57.300209 kubelet[1907]: I0715 11:06:57.300058 1907 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:06:57.300780 kubelet[1907]: I0715 11:06:57.300761 1907 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:06:57.331193 kubelet[1907]: I0715 11:06:57.331110 1907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:06:57.332535 kubelet[1907]: I0715 11:06:57.332474 1907 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:06:57.332598 kubelet[1907]: I0715 11:06:57.332571 1907 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 11:06:57.332598 kubelet[1907]: I0715 11:06:57.332591 1907 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:06:57.332598 kubelet[1907]: I0715 11:06:57.332598 1907 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 11:06:57.332687 kubelet[1907]: E0715 11:06:57.332645 1907 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:06:57.354033 kubelet[1907]: I0715 11:06:57.354008 1907 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 11:06:57.354179 kubelet[1907]: I0715 11:06:57.354163 1907 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:06:57.354242 kubelet[1907]: I0715 11:06:57.354233 1907 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:06:57.354431 kubelet[1907]: I0715 11:06:57.354414 1907 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:06:57.354545 kubelet[1907]: I0715 11:06:57.354500 1907 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:06:57.354617 kubelet[1907]: I0715 11:06:57.354607 1907 policy_none.go:49] "None policy: Start" Jul 15 11:06:57.354674 kubelet[1907]: I0715 11:06:57.354664 1907 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 11:06:57.354729 kubelet[1907]: I0715 11:06:57.354721 1907 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:06:57.354890 kubelet[1907]: I0715 11:06:57.354876 1907 state_mem.go:75] "Updated machine memory state" Jul 15 11:06:57.358254 kubelet[1907]: I0715 11:06:57.358229 1907 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:06:57.358829 kubelet[1907]: I0715 
11:06:57.358806 1907 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:06:57.359574 kubelet[1907]: I0715 11:06:57.358828 1907 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:06:57.359647 kubelet[1907]: E0715 11:06:57.359590 1907 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 11:06:57.359767 kubelet[1907]: I0715 11:06:57.359750 1907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:06:57.433406 kubelet[1907]: I0715 11:06:57.433369 1907 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:57.433406 kubelet[1907]: I0715 11:06:57.433400 1907 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:57.433673 kubelet[1907]: I0715 11:06:57.433657 1907 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.463156 kubelet[1907]: I0715 11:06:57.463126 1907 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:06:57.469079 kubelet[1907]: I0715 11:06:57.469044 1907 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 11:06:57.469198 kubelet[1907]: I0715 11:06:57.469118 1907 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:06:57.600534 kubelet[1907]: I0715 11:06:57.600409 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.600534 kubelet[1907]: I0715 11:06:57.600482 1907 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.601435 kubelet[1907]: I0715 11:06:57.601293 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:57.601988 kubelet[1907]: I0715 11:06:57.601965 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:57.602256 kubelet[1907]: I0715 11:06:57.602235 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b890336b4d95055aa6c46f425648bc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b890336b4d95055aa6c46f425648bc4\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:06:57.602340 kubelet[1907]: I0715 11:06:57.602328 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.602419 kubelet[1907]: I0715 
11:06:57.602406 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.602500 kubelet[1907]: I0715 11:06:57.602487 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:57.602655 kubelet[1907]: I0715 11:06:57.602638 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:57.738607 kubelet[1907]: E0715 11:06:57.738563 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:57.738883 kubelet[1907]: E0715 11:06:57.738854 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:57.739652 kubelet[1907]: E0715 11:06:57.739624 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:57.983242 sudo[1942]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 
11:06:57.983473 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 15 11:06:58.289216 kubelet[1907]: I0715 11:06:58.289117 1907 apiserver.go:52] "Watching apiserver" Jul 15 11:06:58.299672 kubelet[1907]: I0715 11:06:58.299631 1907 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:06:58.342217 kubelet[1907]: I0715 11:06:58.342190 1907 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:58.342410 kubelet[1907]: I0715 11:06:58.342395 1907 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:58.342551 kubelet[1907]: E0715 11:06:58.342510 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:58.347109 kubelet[1907]: E0715 11:06:58.347077 1907 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 11:06:58.347224 kubelet[1907]: E0715 11:06:58.347195 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:58.349462 kubelet[1907]: E0715 11:06:58.349432 1907 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:06:58.349621 kubelet[1907]: E0715 11:06:58.349590 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:58.369693 kubelet[1907]: I0715 11:06:58.369637 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3696219250000001 podStartE2EDuration="1.369621925s" podCreationTimestamp="2025-07-15 11:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:06:58.369168005 +0000 UTC m=+1.130120041" watchObservedRunningTime="2025-07-15 11:06:58.369621925 +0000 UTC m=+1.130573921" Jul 15 11:06:58.369808 kubelet[1907]: I0715 11:06:58.369759 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.369753925 podStartE2EDuration="1.369753925s" podCreationTimestamp="2025-07-15 11:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:06:58.360014165 +0000 UTC m=+1.120966161" watchObservedRunningTime="2025-07-15 11:06:58.369753925 +0000 UTC m=+1.130705921" Jul 15 11:06:58.385900 kubelet[1907]: I0715 11:06:58.385287 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.385264205 podStartE2EDuration="1.385264205s" podCreationTimestamp="2025-07-15 11:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:06:58.385122605 +0000 UTC m=+1.146074681" watchObservedRunningTime="2025-07-15 11:06:58.385264205 +0000 UTC m=+1.146216201" Jul 15 11:06:58.456287 sudo[1942]: pam_unix(sudo:session): session closed for user root Jul 15 11:06:59.343701 kubelet[1907]: E0715 11:06:59.343669 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:59.343996 kubelet[1907]: E0715 11:06:59.343785 1907 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:06:59.344103 kubelet[1907]: E0715 11:06:59.344083 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:00.138893 sudo[1323]: pam_unix(sudo:session): session closed for user root Jul 15 11:07:00.140210 sshd[1318]: pam_unix(sshd:session): session closed for user core Jul 15 11:07:00.142960 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:57604.service: Deactivated successfully. Jul 15 11:07:00.143699 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:07:00.143848 systemd[1]: session-5.scope: Consumed 7.230s CPU time. Jul 15 11:07:00.144249 systemd-logind[1213]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:07:00.144944 systemd-logind[1213]: Removed session 5. Jul 15 11:07:00.344765 kubelet[1907]: E0715 11:07:00.344737 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:02.283730 kubelet[1907]: E0715 11:07:02.283662 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:02.347505 kubelet[1907]: E0715 11:07:02.347222 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:03.090283 kubelet[1907]: I0715 11:07:03.090238 1907 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:07:03.090691 env[1221]: time="2025-07-15T11:07:03.090645955Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Jul 15 11:07:03.091060 kubelet[1907]: I0715 11:07:03.091010 1907 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:07:03.348484 kubelet[1907]: E0715 11:07:03.348385 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:03.979981 systemd[1]: Created slice kubepods-besteffort-pod26beb4d8_e9f3_4d42_8b6d_9faa4660b3bf.slice. Jul 15 11:07:03.990922 systemd[1]: Created slice kubepods-burstable-poda2494d49_63f1_49e7_b31c_3e574bb849b9.slice. Jul 15 11:07:04.047540 kubelet[1907]: I0715 11:07:04.047434 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf-xtables-lock\") pod \"kube-proxy-r8hrs\" (UID: \"26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf\") " pod="kube-system/kube-proxy-r8hrs" Jul 15 11:07:04.047540 kubelet[1907]: I0715 11:07:04.047515 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cni-path\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047540 kubelet[1907]: I0715 11:07:04.047547 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-lib-modules\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047563 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf-kube-proxy\") pod \"kube-proxy-r8hrs\" (UID: \"26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf\") " pod="kube-system/kube-proxy-r8hrs" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047582 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-run\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047607 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-hostproc\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047625 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-net\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047642 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-kernel\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047763 kubelet[1907]: I0715 11:07:04.047662 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-bpf-maps\") pod \"cilium-wdczg\" (UID: 
\"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047927 kubelet[1907]: I0715 11:07:04.047684 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-hubble-tls\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047927 kubelet[1907]: I0715 11:07:04.047702 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5kx\" (UniqueName: \"kubernetes.io/projected/26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf-kube-api-access-5s5kx\") pod \"kube-proxy-r8hrs\" (UID: \"26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf\") " pod="kube-system/kube-proxy-r8hrs" Jul 15 11:07:04.047927 kubelet[1907]: I0715 11:07:04.047716 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-etc-cni-netd\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047927 kubelet[1907]: I0715 11:07:04.047730 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-xtables-lock\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.047927 kubelet[1907]: I0715 11:07:04.047744 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-config-path\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.048031 
kubelet[1907]: I0715 11:07:04.047770 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf-lib-modules\") pod \"kube-proxy-r8hrs\" (UID: \"26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf\") " pod="kube-system/kube-proxy-r8hrs" Jul 15 11:07:04.048031 kubelet[1907]: I0715 11:07:04.047787 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-cgroup\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.048031 kubelet[1907]: I0715 11:07:04.047804 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2494d49-63f1-49e7-b31c-3e574bb849b9-clustermesh-secrets\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.048031 kubelet[1907]: I0715 11:07:04.047820 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98j82\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-kube-api-access-98j82\") pod \"cilium-wdczg\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") " pod="kube-system/cilium-wdczg" Jul 15 11:07:04.138218 systemd[1]: Created slice kubepods-besteffort-podc80ef793_f5c1_4803_a7e3_9c8fe3e66ebc.slice. 
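The `Created slice` lines above show the systemd cgroup driver's naming scheme: `kubepods-<qos>-pod<uid>.slice`, with the dashes of the pod UID replaced by underscores. A small sketch of that mapping; the helper name is hypothetical, not kubelet code, and the guaranteed-QoS caveat in the docstring is an assumption based on general kubelet behavior rather than anything in this log:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Mimic the systemd cgroup slice naming seen in the log:
    kubepods-<qos>-pod<uid-with-underscores>.slice. (Guaranteed-QoS pods
    omit the qos segment; only besteffort/burstable slices appear above.)"""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Matches the slice created for the cilium-operator pod above:
print(pod_slice_name("besteffort", "c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc"))
# kubepods-besteffort-podc80ef793_f5c1_4803_a7e3_9c8fe3e66ebc.slice
```

The same rule reproduces the burstable slice created earlier for the cilium-wdczg pod (`a2494d49-63f1-...`).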
Jul 15 11:07:04.148554 kubelet[1907]: I0715 11:07:04.148504 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vsh\" (UniqueName: \"kubernetes.io/projected/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-kube-api-access-r9vsh\") pod \"cilium-operator-6c4d7847fc-6zt57\" (UID: \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\") " pod="kube-system/cilium-operator-6c4d7847fc-6zt57" Jul 15 11:07:04.148677 kubelet[1907]: I0715 11:07:04.148640 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6zt57\" (UID: \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\") " pod="kube-system/cilium-operator-6c4d7847fc-6zt57" Jul 15 11:07:04.148970 kubelet[1907]: I0715 11:07:04.148942 1907 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:07:04.288281 kubelet[1907]: E0715 11:07:04.288195 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:04.289116 env[1221]: time="2025-07-15T11:07:04.288954392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8hrs,Uid:26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf,Namespace:kube-system,Attempt:0,}" Jul 15 11:07:04.293293 kubelet[1907]: E0715 11:07:04.293265 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:04.293760 env[1221]: time="2025-07-15T11:07:04.293726118Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-wdczg,Uid:a2494d49-63f1-49e7-b31c-3e574bb849b9,Namespace:kube-system,Attempt:0,}" Jul 15 11:07:04.306739 env[1221]: time="2025-07-15T11:07:04.306649372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:07:04.306739 env[1221]: time="2025-07-15T11:07:04.306688452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:07:04.306894 env[1221]: time="2025-07-15T11:07:04.306698933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:07:04.307184 env[1221]: time="2025-07-15T11:07:04.307133133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b2857032fca3a63dd1f120aca041c5cbb4842cadb0c8165d6dbe404b90bb734 pid=2000 runtime=io.containerd.runc.v2 Jul 15 11:07:04.309733 env[1221]: time="2025-07-15T11:07:04.309665656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:07:04.309733 env[1221]: time="2025-07-15T11:07:04.309699736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:07:04.309733 env[1221]: time="2025-07-15T11:07:04.309709856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:07:04.309886 env[1221]: time="2025-07-15T11:07:04.309852936Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47 pid=2016 runtime=io.containerd.runc.v2 Jul 15 11:07:04.318924 systemd[1]: Started cri-containerd-1b2857032fca3a63dd1f120aca041c5cbb4842cadb0c8165d6dbe404b90bb734.scope. Jul 15 11:07:04.323329 systemd[1]: Started cri-containerd-84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47.scope. Jul 15 11:07:04.359048 env[1221]: time="2025-07-15T11:07:04.358996592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8hrs,Uid:26beb4d8-e9f3-4d42-8b6d-9faa4660b3bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2857032fca3a63dd1f120aca041c5cbb4842cadb0c8165d6dbe404b90bb734\"" Jul 15 11:07:04.359691 kubelet[1907]: E0715 11:07:04.359669 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:04.361983 env[1221]: time="2025-07-15T11:07:04.361950076Z" level=info msg="CreateContainer within sandbox \"1b2857032fca3a63dd1f120aca041c5cbb4842cadb0c8165d6dbe404b90bb734\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:07:04.366341 env[1221]: time="2025-07-15T11:07:04.366310281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdczg,Uid:a2494d49-63f1-49e7-b31c-3e574bb849b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\"" Jul 15 11:07:04.366918 kubelet[1907]: E0715 11:07:04.366899 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:04.368208 env[1221]: 
time="2025-07-15T11:07:04.368179763Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 11:07:04.375628 env[1221]: time="2025-07-15T11:07:04.375583491Z" level=info msg="CreateContainer within sandbox \"1b2857032fca3a63dd1f120aca041c5cbb4842cadb0c8165d6dbe404b90bb734\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d87b94ff5046601d8f34646769c0dd0ee49337a30826c7d48f0050117823dc4\"" Jul 15 11:07:04.377074 env[1221]: time="2025-07-15T11:07:04.377035853Z" level=info msg="StartContainer for \"0d87b94ff5046601d8f34646769c0dd0ee49337a30826c7d48f0050117823dc4\"" Jul 15 11:07:04.400082 systemd[1]: Started cri-containerd-0d87b94ff5046601d8f34646769c0dd0ee49337a30826c7d48f0050117823dc4.scope. Jul 15 11:07:04.435014 env[1221]: time="2025-07-15T11:07:04.434971919Z" level=info msg="StartContainer for \"0d87b94ff5046601d8f34646769c0dd0ee49337a30826c7d48f0050117823dc4\" returns successfully" Jul 15 11:07:04.441053 kubelet[1907]: E0715 11:07:04.441018 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:04.442309 env[1221]: time="2025-07-15T11:07:04.441583487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6zt57,Uid:c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc,Namespace:kube-system,Attempt:0,}" Jul 15 11:07:04.465088 env[1221]: time="2025-07-15T11:07:04.465003033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:07:04.465088 env[1221]: time="2025-07-15T11:07:04.465062633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:07:04.465088 env[1221]: time="2025-07-15T11:07:04.465073713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:07:04.465358 env[1221]: time="2025-07-15T11:07:04.465311034Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941 pid=2116 runtime=io.containerd.runc.v2 Jul 15 11:07:04.480489 systemd[1]: Started cri-containerd-532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941.scope. Jul 15 11:07:04.523545 env[1221]: time="2025-07-15T11:07:04.522057658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6zt57,Uid:c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\"" Jul 15 11:07:04.523776 kubelet[1907]: E0715 11:07:04.522617 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:05.356333 kubelet[1907]: E0715 11:07:05.356284 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:05.366304 kubelet[1907]: I0715 11:07:05.366046 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r8hrs" podStartSLOduration=2.366031517 podStartE2EDuration="2.366031517s" podCreationTimestamp="2025-07-15 11:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:07:05.365970316 +0000 UTC m=+8.126922312" watchObservedRunningTime="2025-07-15 
11:07:05.366031517 +0000 UTC m=+8.126983513" Jul 15 11:07:07.812653 kubelet[1907]: E0715 11:07:07.812619 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:08.298760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839143474.mount: Deactivated successfully. Jul 15 11:07:10.000882 kubelet[1907]: E0715 11:07:10.000844 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:10.407641 kubelet[1907]: E0715 11:07:10.407396 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:10.649432 env[1221]: time="2025-07-15T11:07:10.649376272Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:10.651131 env[1221]: time="2025-07-15T11:07:10.651096754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:10.653125 env[1221]: time="2025-07-15T11:07:10.653097275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:10.653765 env[1221]: time="2025-07-15T11:07:10.653735116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 15 11:07:10.655523 env[1221]: time="2025-07-15T11:07:10.655453557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 11:07:10.656510 env[1221]: time="2025-07-15T11:07:10.656483478Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:07:10.671245 env[1221]: time="2025-07-15T11:07:10.670960049Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\"" Jul 15 11:07:10.671811 env[1221]: time="2025-07-15T11:07:10.671786930Z" level=info msg="StartContainer for \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\"" Jul 15 11:07:10.700776 systemd[1]: Started cri-containerd-972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8.scope. Jul 15 11:07:10.799004 env[1221]: time="2025-07-15T11:07:10.798932028Z" level=info msg="StartContainer for \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\" returns successfully" Jul 15 11:07:10.818149 systemd[1]: cri-containerd-972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8.scope: Deactivated successfully. 
Jul 15 11:07:10.858986 env[1221]: time="2025-07-15T11:07:10.858940555Z" level=info msg="shim disconnected" id=972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8 Jul 15 11:07:10.858986 env[1221]: time="2025-07-15T11:07:10.858985755Z" level=warning msg="cleaning up after shim disconnected" id=972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8 namespace=k8s.io Jul 15 11:07:10.858986 env[1221]: time="2025-07-15T11:07:10.858994715Z" level=info msg="cleaning up dead shim" Jul 15 11:07:10.866028 env[1221]: time="2025-07-15T11:07:10.865990600Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:07:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2330 runtime=io.containerd.runc.v2\n" Jul 15 11:07:11.412948 kubelet[1907]: E0715 11:07:11.412909 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:11.417439 env[1221]: time="2025-07-15T11:07:11.417392328Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:07:11.428668 env[1221]: time="2025-07-15T11:07:11.428599336Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\"" Jul 15 11:07:11.429303 env[1221]: time="2025-07-15T11:07:11.429274456Z" level=info msg="StartContainer for \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\"" Jul 15 11:07:11.447654 systemd[1]: Started cri-containerd-4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f.scope. 
Jul 15 11:07:11.500473 env[1221]: time="2025-07-15T11:07:11.500397948Z" level=info msg="StartContainer for \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\" returns successfully" Jul 15 11:07:11.511955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:07:11.512200 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:07:11.512437 systemd[1]: Stopping systemd-sysctl.service... Jul 15 11:07:11.514008 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:07:11.515065 systemd[1]: cri-containerd-4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f.scope: Deactivated successfully. Jul 15 11:07:11.525561 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:07:11.535125 env[1221]: time="2025-07-15T11:07:11.535055573Z" level=info msg="shim disconnected" id=4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f Jul 15 11:07:11.535125 env[1221]: time="2025-07-15T11:07:11.535114453Z" level=warning msg="cleaning up after shim disconnected" id=4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f namespace=k8s.io Jul 15 11:07:11.535125 env[1221]: time="2025-07-15T11:07:11.535126653Z" level=info msg="cleaning up dead shim" Jul 15 11:07:11.542741 env[1221]: time="2025-07-15T11:07:11.542685339Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:07:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2393 runtime=io.containerd.runc.v2\n" Jul 15 11:07:11.667985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8-rootfs.mount: Deactivated successfully. 
Jul 15 11:07:12.156605 env[1221]: time="2025-07-15T11:07:12.156564058Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:12.157712 env[1221]: time="2025-07-15T11:07:12.157687259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:12.159210 env[1221]: time="2025-07-15T11:07:12.159180860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:07:12.159882 env[1221]: time="2025-07-15T11:07:12.159852620Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 11:07:12.163443 env[1221]: time="2025-07-15T11:07:12.162975222Z" level=info msg="CreateContainer within sandbox \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 11:07:12.174952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221664756.mount: Deactivated successfully. Jul 15 11:07:12.179464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083416287.mount: Deactivated successfully. 
Jul 15 11:07:12.183919 env[1221]: time="2025-07-15T11:07:12.183880836Z" level=info msg="CreateContainer within sandbox \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\"" Jul 15 11:07:12.185202 env[1221]: time="2025-07-15T11:07:12.184504077Z" level=info msg="StartContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\"" Jul 15 11:07:12.203470 systemd[1]: Started cri-containerd-f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a.scope. Jul 15 11:07:12.298735 env[1221]: time="2025-07-15T11:07:12.298678195Z" level=info msg="StartContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" returns successfully" Jul 15 11:07:12.416168 kubelet[1907]: E0715 11:07:12.416044 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:12.418608 kubelet[1907]: E0715 11:07:12.418512 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:12.420369 env[1221]: time="2025-07-15T11:07:12.420327837Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:07:12.435963 kubelet[1907]: I0715 11:07:12.435900 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6zt57" podStartSLOduration=0.798413767 podStartE2EDuration="8.435883768s" podCreationTimestamp="2025-07-15 11:07:04 +0000 UTC" firstStartedPulling="2025-07-15 11:07:04.52333974 +0000 UTC m=+7.284291736" lastFinishedPulling="2025-07-15 11:07:12.160809741 
+0000 UTC m=+14.921761737" observedRunningTime="2025-07-15 11:07:12.434877487 +0000 UTC m=+15.195829483" watchObservedRunningTime="2025-07-15 11:07:12.435883768 +0000 UTC m=+15.196835764" Jul 15 11:07:12.439963 env[1221]: time="2025-07-15T11:07:12.439913411Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\"" Jul 15 11:07:12.440637 env[1221]: time="2025-07-15T11:07:12.440609411Z" level=info msg="StartContainer for \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\"" Jul 15 11:07:12.474908 systemd[1]: Started cri-containerd-3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce.scope. Jul 15 11:07:12.529558 env[1221]: time="2025-07-15T11:07:12.529501632Z" level=info msg="StartContainer for \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\" returns successfully" Jul 15 11:07:12.533158 systemd[1]: cri-containerd-3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce.scope: Deactivated successfully. 
Jul 15 11:07:12.558226 env[1221]: time="2025-07-15T11:07:12.558179571Z" level=info msg="shim disconnected" id=3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce Jul 15 11:07:12.558226 env[1221]: time="2025-07-15T11:07:12.558224771Z" level=warning msg="cleaning up after shim disconnected" id=3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce namespace=k8s.io Jul 15 11:07:12.558434 env[1221]: time="2025-07-15T11:07:12.558235251Z" level=info msg="cleaning up dead shim" Jul 15 11:07:12.565547 env[1221]: time="2025-07-15T11:07:12.565490456Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:07:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2493 runtime=io.containerd.runc.v2\n" Jul 15 11:07:13.422245 kubelet[1907]: E0715 11:07:13.422028 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:13.422245 kubelet[1907]: E0715 11:07:13.422095 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:13.424059 env[1221]: time="2025-07-15T11:07:13.424018823Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:07:13.441137 env[1221]: time="2025-07-15T11:07:13.441080834Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\"" Jul 15 11:07:13.441623 env[1221]: time="2025-07-15T11:07:13.441589395Z" level=info msg="StartContainer for \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\"" Jul 15 
11:07:13.456739 systemd[1]: Started cri-containerd-b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775.scope. Jul 15 11:07:13.499840 env[1221]: time="2025-07-15T11:07:13.499781792Z" level=info msg="StartContainer for \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\" returns successfully" Jul 15 11:07:13.499998 systemd[1]: cri-containerd-b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775.scope: Deactivated successfully. Jul 15 11:07:13.526290 env[1221]: time="2025-07-15T11:07:13.525282608Z" level=info msg="shim disconnected" id=b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775 Jul 15 11:07:13.526290 env[1221]: time="2025-07-15T11:07:13.526285649Z" level=warning msg="cleaning up after shim disconnected" id=b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775 namespace=k8s.io Jul 15 11:07:13.526290 env[1221]: time="2025-07-15T11:07:13.526296969Z" level=info msg="cleaning up dead shim" Jul 15 11:07:13.534113 env[1221]: time="2025-07-15T11:07:13.534009814Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:07:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2548 runtime=io.containerd.runc.v2\n" Jul 15 11:07:13.543170 update_engine[1215]: I0715 11:07:13.542805 1215 update_attempter.cc:509] Updating boot flags... Jul 15 11:07:13.667914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775-rootfs.mount: Deactivated successfully. 
Jul 15 11:07:14.426787 kubelet[1907]: E0715 11:07:14.426748 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:14.428474 env[1221]: time="2025-07-15T11:07:14.428436248Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:07:14.442301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932614379.mount: Deactivated successfully. Jul 15 11:07:14.447109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448874282.mount: Deactivated successfully. Jul 15 11:07:14.451612 env[1221]: time="2025-07-15T11:07:14.451499742Z" level=info msg="CreateContainer within sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\"" Jul 15 11:07:14.453047 env[1221]: time="2025-07-15T11:07:14.452139462Z" level=info msg="StartContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\"" Jul 15 11:07:14.467818 systemd[1]: Started cri-containerd-fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830.scope. Jul 15 11:07:14.516878 env[1221]: time="2025-07-15T11:07:14.516826741Z" level=info msg="StartContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" returns successfully" Jul 15 11:07:14.656580 kubelet[1907]: I0715 11:07:14.655930 1907 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 11:07:14.708980 systemd[1]: Created slice kubepods-burstable-pod1529bf13_623b_4250_92ee_09706963ece4.slice. Jul 15 11:07:14.714425 systemd[1]: Created slice kubepods-burstable-podcc1ee6a0_14ed_4c6f_a747_5d4cc6473456.slice. 
Jul 15 11:07:14.726745 kubelet[1907]: I0715 11:07:14.726711 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc1ee6a0-14ed-4c6f-a747-5d4cc6473456-config-volume\") pod \"coredns-668d6bf9bc-bwd6t\" (UID: \"cc1ee6a0-14ed-4c6f-a747-5d4cc6473456\") " pod="kube-system/coredns-668d6bf9bc-bwd6t" Jul 15 11:07:14.726836 kubelet[1907]: I0715 11:07:14.726750 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprfq\" (UniqueName: \"kubernetes.io/projected/cc1ee6a0-14ed-4c6f-a747-5d4cc6473456-kube-api-access-lprfq\") pod \"coredns-668d6bf9bc-bwd6t\" (UID: \"cc1ee6a0-14ed-4c6f-a747-5d4cc6473456\") " pod="kube-system/coredns-668d6bf9bc-bwd6t" Jul 15 11:07:14.726836 kubelet[1907]: I0715 11:07:14.726778 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1529bf13-623b-4250-92ee-09706963ece4-config-volume\") pod \"coredns-668d6bf9bc-92pss\" (UID: \"1529bf13-623b-4250-92ee-09706963ece4\") " pod="kube-system/coredns-668d6bf9bc-92pss" Jul 15 11:07:14.726836 kubelet[1907]: I0715 11:07:14.726797 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55dhc\" (UniqueName: \"kubernetes.io/projected/1529bf13-623b-4250-92ee-09706963ece4-kube-api-access-55dhc\") pod \"coredns-668d6bf9bc-92pss\" (UID: \"1529bf13-623b-4250-92ee-09706963ece4\") " pod="kube-system/coredns-668d6bf9bc-92pss" Jul 15 11:07:14.793539 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 15 11:07:15.011771 kubelet[1907]: E0715 11:07:15.011684 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:15.012449 env[1221]: time="2025-07-15T11:07:15.012267997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-92pss,Uid:1529bf13-623b-4250-92ee-09706963ece4,Namespace:kube-system,Attempt:0,}" Jul 15 11:07:15.020778 kubelet[1907]: E0715 11:07:15.020738 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:15.021269 env[1221]: time="2025-07-15T11:07:15.021230762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bwd6t,Uid:cc1ee6a0-14ed-4c6f-a747-5d4cc6473456,Namespace:kube-system,Attempt:0,}" Jul 15 11:07:15.036546 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 15 11:07:15.430013 kubelet[1907]: E0715 11:07:15.429915 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:16.431991 kubelet[1907]: E0715 11:07:16.431962 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:16.682936 systemd-networkd[1049]: cilium_host: Link UP Jul 15 11:07:16.683057 systemd-networkd[1049]: cilium_net: Link UP Jul 15 11:07:16.684200 systemd-networkd[1049]: cilium_net: Gained carrier Jul 15 11:07:16.685669 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 15 11:07:16.685741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 15 11:07:16.684955 systemd-networkd[1049]: cilium_host: Gained carrier Jul 15 11:07:16.685106 systemd-networkd[1049]: cilium_net: Gained IPv6LL Jul 15 11:07:16.685223 systemd-networkd[1049]: cilium_host: Gained IPv6LL Jul 15 11:07:16.778469 systemd-networkd[1049]: cilium_vxlan: Link UP Jul 15 11:07:16.778476 systemd-networkd[1049]: cilium_vxlan: Gained carrier Jul 15 11:07:17.088554 kernel: NET: Registered PF_ALG protocol family Jul 15 11:07:17.433321 kubelet[1907]: E0715 11:07:17.433046 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:17.684082 systemd-networkd[1049]: lxc_health: Link UP Jul 15 11:07:17.693650 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:07:17.692923 systemd-networkd[1049]: lxc_health: Gained carrier Jul 15 11:07:18.102197 systemd-networkd[1049]: lxc8975363bbad9: Link UP Jul 15 11:07:18.113612 kernel: eth0: renamed from tmpcbf7c Jul 15 11:07:18.128489 systemd-networkd[1049]: lxce9b333f97881: Link UP Jul 15 
11:07:18.130171 systemd-networkd[1049]: lxc8975363bbad9: Gained carrier Jul 15 11:07:18.130614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8975363bbad9: link becomes ready Jul 15 11:07:18.136003 kernel: eth0: renamed from tmpbacb0 Jul 15 11:07:18.144247 systemd-networkd[1049]: lxce9b333f97881: Gained carrier Jul 15 11:07:18.144782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce9b333f97881: link becomes ready Jul 15 11:07:18.315259 kubelet[1907]: I0715 11:07:18.315166 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdczg" podStartSLOduration=9.027731002 podStartE2EDuration="15.315142597s" podCreationTimestamp="2025-07-15 11:07:03 +0000 UTC" firstStartedPulling="2025-07-15 11:07:04.367779602 +0000 UTC m=+7.128731598" lastFinishedPulling="2025-07-15 11:07:10.655191237 +0000 UTC m=+13.416143193" observedRunningTime="2025-07-15 11:07:15.446232041 +0000 UTC m=+18.207184037" watchObservedRunningTime="2025-07-15 11:07:18.315142597 +0000 UTC m=+21.076094593" Jul 15 11:07:18.435196 kubelet[1907]: E0715 11:07:18.435094 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:18.773642 systemd-networkd[1049]: lxc_health: Gained IPv6LL Jul 15 11:07:18.773907 systemd-networkd[1049]: cilium_vxlan: Gained IPv6LL Jul 15 11:07:19.221672 systemd-networkd[1049]: lxc8975363bbad9: Gained IPv6LL Jul 15 11:07:20.053659 systemd-networkd[1049]: lxce9b333f97881: Gained IPv6LL Jul 15 11:07:21.607585 env[1221]: time="2025-07-15T11:07:21.607367706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:07:21.607585 env[1221]: time="2025-07-15T11:07:21.607417706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:07:21.607585 env[1221]: time="2025-07-15T11:07:21.607428146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:07:21.608702 env[1221]: time="2025-07-15T11:07:21.608138346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbf7c19713d29cfc788e4f6e6bb521b0a1e3670099f62a0eb21a03ade5af085e pid=3134 runtime=io.containerd.runc.v2 Jul 15 11:07:21.611827 env[1221]: time="2025-07-15T11:07:21.611584548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:07:21.611827 env[1221]: time="2025-07-15T11:07:21.611630868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:07:21.611827 env[1221]: time="2025-07-15T11:07:21.611647668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:07:21.612145 env[1221]: time="2025-07-15T11:07:21.612093588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bacb0d0b1c639e91d9fcbe2a7577eb796494e769992dfba2691a0a839f059145 pid=3148 runtime=io.containerd.runc.v2 Jul 15 11:07:21.628718 systemd[1]: Started cri-containerd-cbf7c19713d29cfc788e4f6e6bb521b0a1e3670099f62a0eb21a03ade5af085e.scope. Jul 15 11:07:21.639971 systemd[1]: Started cri-containerd-bacb0d0b1c639e91d9fcbe2a7577eb796494e769992dfba2691a0a839f059145.scope. 
Jul 15 11:07:21.672215 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:07:21.673654 systemd-resolved[1159]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:07:21.692029 env[1221]: time="2025-07-15T11:07:21.691979178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-92pss,Uid:1529bf13-623b-4250-92ee-09706963ece4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf7c19713d29cfc788e4f6e6bb521b0a1e3670099f62a0eb21a03ade5af085e\"" Jul 15 11:07:21.692162 env[1221]: time="2025-07-15T11:07:21.692087298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bwd6t,Uid:cc1ee6a0-14ed-4c6f-a747-5d4cc6473456,Namespace:kube-system,Attempt:0,} returns sandbox id \"bacb0d0b1c639e91d9fcbe2a7577eb796494e769992dfba2691a0a839f059145\"" Jul 15 11:07:21.693354 kubelet[1907]: E0715 11:07:21.693326 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:21.693635 kubelet[1907]: E0715 11:07:21.693552 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:21.696660 env[1221]: time="2025-07-15T11:07:21.695722780Z" level=info msg="CreateContainer within sandbox \"bacb0d0b1c639e91d9fcbe2a7577eb796494e769992dfba2691a0a839f059145\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:07:21.696660 env[1221]: time="2025-07-15T11:07:21.696126860Z" level=info msg="CreateContainer within sandbox \"cbf7c19713d29cfc788e4f6e6bb521b0a1e3670099f62a0eb21a03ade5af085e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:07:21.768272 env[1221]: time="2025-07-15T11:07:21.768221167Z" level=info msg="CreateContainer 
within sandbox \"cbf7c19713d29cfc788e4f6e6bb521b0a1e3670099f62a0eb21a03ade5af085e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c21897ade3c5f5c962b927120e74e236e9d45312ef26b8922442873651cb345f\"" Jul 15 11:07:21.768899 env[1221]: time="2025-07-15T11:07:21.768836488Z" level=info msg="StartContainer for \"c21897ade3c5f5c962b927120e74e236e9d45312ef26b8922442873651cb345f\"" Jul 15 11:07:21.771849 env[1221]: time="2025-07-15T11:07:21.771814529Z" level=info msg="CreateContainer within sandbox \"bacb0d0b1c639e91d9fcbe2a7577eb796494e769992dfba2691a0a839f059145\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60980bc14355cc5c847cddf880475562a6173341378de1ca3c3e0df093d6d03c\"" Jul 15 11:07:21.772356 env[1221]: time="2025-07-15T11:07:21.772236889Z" level=info msg="StartContainer for \"60980bc14355cc5c847cddf880475562a6173341378de1ca3c3e0df093d6d03c\"" Jul 15 11:07:21.787866 systemd[1]: Started cri-containerd-c21897ade3c5f5c962b927120e74e236e9d45312ef26b8922442873651cb345f.scope. Jul 15 11:07:21.800778 systemd[1]: Started cri-containerd-60980bc14355cc5c847cddf880475562a6173341378de1ca3c3e0df093d6d03c.scope. 
Jul 15 11:07:21.836991 env[1221]: time="2025-07-15T11:07:21.836949194Z" level=info msg="StartContainer for \"60980bc14355cc5c847cddf880475562a6173341378de1ca3c3e0df093d6d03c\" returns successfully" Jul 15 11:07:21.851088 env[1221]: time="2025-07-15T11:07:21.851028199Z" level=info msg="StartContainer for \"c21897ade3c5f5c962b927120e74e236e9d45312ef26b8922442873651cb345f\" returns successfully" Jul 15 11:07:22.444829 kubelet[1907]: E0715 11:07:22.444794 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:22.446845 kubelet[1907]: E0715 11:07:22.446640 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:22.455610 kubelet[1907]: I0715 11:07:22.455553 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bwd6t" podStartSLOduration=18.455535899 podStartE2EDuration="18.455535899s" podCreationTimestamp="2025-07-15 11:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:07:22.455012338 +0000 UTC m=+25.215964374" watchObservedRunningTime="2025-07-15 11:07:22.455535899 +0000 UTC m=+25.216487895" Jul 15 11:07:22.480452 kubelet[1907]: I0715 11:07:22.480374 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-92pss" podStartSLOduration=18.480356107 podStartE2EDuration="18.480356107s" podCreationTimestamp="2025-07-15 11:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:07:22.465552022 +0000 UTC m=+25.226504058" watchObservedRunningTime="2025-07-15 11:07:22.480356107 +0000 UTC m=+25.241308103" Jul 
15 11:07:23.448363 kubelet[1907]: E0715 11:07:23.448332 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:23.448713 kubelet[1907]: E0715 11:07:23.448398 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:23.501109 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:53254.service. Jul 15 11:07:23.542966 sshd[3291]: Accepted publickey for core from 10.0.0.1 port 53254 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:07:23.544485 sshd[3291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:07:23.548455 systemd-logind[1213]: New session 6 of user core. Jul 15 11:07:23.548948 systemd[1]: Started session-6.scope. Jul 15 11:07:23.672974 sshd[3291]: pam_unix(sshd:session): session closed for user core Jul 15 11:07:23.675432 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:53254.service: Deactivated successfully. Jul 15 11:07:23.676137 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:07:23.676698 systemd-logind[1213]: Session 6 logged out. Waiting for processes to exit. Jul 15 11:07:23.677585 systemd-logind[1213]: Removed session 6. Jul 15 11:07:24.450597 kubelet[1907]: E0715 11:07:24.450083 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:24.450597 kubelet[1907]: E0715 11:07:24.450210 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:07:28.677916 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:53262.service. 
Jul 15 11:07:28.712689 sshd[3306]: Accepted publickey for core from 10.0.0.1 port 53262 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:28.714243 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:28.718201 systemd[1]: Started session-7.scope.
Jul 15 11:07:28.718584 systemd-logind[1213]: New session 7 of user core.
Jul 15 11:07:28.828795 sshd[3306]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:28.831388 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:53262.service: Deactivated successfully.
Jul 15 11:07:28.832148 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 11:07:28.832698 systemd-logind[1213]: Session 7 logged out. Waiting for processes to exit.
Jul 15 11:07:28.834049 systemd-logind[1213]: Removed session 7.
Jul 15 11:07:32.204857 kubelet[1907]: I0715 11:07:32.204814 1907 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 11:07:32.205308 kubelet[1907]: E0715 11:07:32.205271 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:07:32.469003 kubelet[1907]: E0715 11:07:32.468899 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:07:33.832164 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:52268.service.
Jul 15 11:07:33.867468 sshd[3322]: Accepted publickey for core from 10.0.0.1 port 52268 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:33.868841 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:33.872304 systemd-logind[1213]: New session 8 of user core.
Jul 15 11:07:33.872750 systemd[1]: Started session-8.scope.
Jul 15 11:07:33.986372 sshd[3322]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:33.988663 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:52268.service: Deactivated successfully.
Jul 15 11:07:33.989389 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 11:07:33.989929 systemd-logind[1213]: Session 8 logged out. Waiting for processes to exit.
Jul 15 11:07:33.990575 systemd-logind[1213]: Removed session 8.
Jul 15 11:07:38.990328 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:52276.service.
Jul 15 11:07:39.028208 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 52276 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:39.029476 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:39.033148 systemd-logind[1213]: New session 9 of user core.
Jul 15 11:07:39.033577 systemd[1]: Started session-9.scope.
Jul 15 11:07:39.146708 sshd[3339]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:39.150824 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:52278.service.
Jul 15 11:07:39.151883 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:52276.service: Deactivated successfully.
Jul 15 11:07:39.152923 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 11:07:39.153638 systemd-logind[1213]: Session 9 logged out. Waiting for processes to exit.
Jul 15 11:07:39.154627 systemd-logind[1213]: Removed session 9.
Jul 15 11:07:39.187042 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 52278 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:39.188302 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:39.191713 systemd-logind[1213]: New session 10 of user core.
Jul 15 11:07:39.192759 systemd[1]: Started session-10.scope.
Jul 15 11:07:39.342822 sshd[3352]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:39.346715 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:52280.service.
Jul 15 11:07:39.349183 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:52278.service: Deactivated successfully.
Jul 15 11:07:39.349825 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 11:07:39.350401 systemd-logind[1213]: Session 10 logged out. Waiting for processes to exit.
Jul 15 11:07:39.351157 systemd-logind[1213]: Removed session 10.
Jul 15 11:07:39.387261 sshd[3363]: Accepted publickey for core from 10.0.0.1 port 52280 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:39.388444 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:39.391495 systemd-logind[1213]: New session 11 of user core.
Jul 15 11:07:39.392322 systemd[1]: Started session-11.scope.
Jul 15 11:07:39.504531 sshd[3363]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:39.506917 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:52280.service: Deactivated successfully.
Jul 15 11:07:39.507613 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 11:07:39.508137 systemd-logind[1213]: Session 11 logged out. Waiting for processes to exit.
Jul 15 11:07:39.508898 systemd-logind[1213]: Removed session 11.
Jul 15 11:07:44.509314 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:51984.service.
Jul 15 11:07:44.546659 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 51984 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:44.547984 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:44.551655 systemd-logind[1213]: New session 12 of user core.
Jul 15 11:07:44.552120 systemd[1]: Started session-12.scope.
Jul 15 11:07:44.656661 sshd[3377]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:44.658928 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:51984.service: Deactivated successfully.
Jul 15 11:07:44.659673 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 11:07:44.660159 systemd-logind[1213]: Session 12 logged out. Waiting for processes to exit.
Jul 15 11:07:44.660830 systemd-logind[1213]: Removed session 12.
Jul 15 11:07:49.661591 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:51992.service.
Jul 15 11:07:49.700437 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 51992 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:49.701800 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:49.706051 systemd-logind[1213]: New session 13 of user core.
Jul 15 11:07:49.706637 systemd[1]: Started session-13.scope.
Jul 15 11:07:49.831936 sshd[3390]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:49.836274 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:51992.service: Deactivated successfully.
Jul 15 11:07:49.836904 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 11:07:49.838292 systemd-logind[1213]: Session 13 logged out. Waiting for processes to exit.
Jul 15 11:07:49.840502 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:51998.service.
Jul 15 11:07:49.842036 systemd-logind[1213]: Removed session 13.
Jul 15 11:07:49.881458 sshd[3403]: Accepted publickey for core from 10.0.0.1 port 51998 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:49.882633 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:49.886880 systemd-logind[1213]: New session 14 of user core.
Jul 15 11:07:49.888197 systemd[1]: Started session-14.scope.
Jul 15 11:07:50.088600 sshd[3403]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:50.092577 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:52006.service.
Jul 15 11:07:50.093076 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:51998.service: Deactivated successfully.
Jul 15 11:07:50.093853 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 11:07:50.094494 systemd-logind[1213]: Session 14 logged out. Waiting for processes to exit.
Jul 15 11:07:50.095357 systemd-logind[1213]: Removed session 14.
Jul 15 11:07:50.129078 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 52006 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:50.130172 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:50.133337 systemd-logind[1213]: New session 15 of user core.
Jul 15 11:07:50.134152 systemd[1]: Started session-15.scope.
Jul 15 11:07:50.865015 sshd[3413]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:50.868321 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:52022.service.
Jul 15 11:07:50.868932 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:52006.service: Deactivated successfully.
Jul 15 11:07:50.869580 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 11:07:50.870477 systemd-logind[1213]: Session 15 logged out. Waiting for processes to exit.
Jul 15 11:07:50.872091 systemd-logind[1213]: Removed session 15.
Jul 15 11:07:50.918158 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 52022 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:50.919602 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:50.923842 systemd-logind[1213]: New session 16 of user core.
Jul 15 11:07:50.924725 systemd[1]: Started session-16.scope.
Jul 15 11:07:51.173457 sshd[3434]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:51.177506 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:52022.service: Deactivated successfully.
Jul 15 11:07:51.178103 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 11:07:51.178835 systemd-logind[1213]: Session 16 logged out. Waiting for processes to exit.
Jul 15 11:07:51.180001 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:52026.service.
Jul 15 11:07:51.180826 systemd-logind[1213]: Removed session 16.
Jul 15 11:07:51.215511 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 52026 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:51.217158 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:51.221290 systemd-logind[1213]: New session 17 of user core.
Jul 15 11:07:51.222005 systemd[1]: Started session-17.scope.
Jul 15 11:07:51.348254 sshd[3448]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:51.351930 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:52026.service: Deactivated successfully.
Jul 15 11:07:51.352697 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 11:07:51.353203 systemd-logind[1213]: Session 17 logged out. Waiting for processes to exit.
Jul 15 11:07:51.353926 systemd-logind[1213]: Removed session 17.
Jul 15 11:07:56.354841 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:55510.service.
Jul 15 11:07:56.392813 sshd[3465]: Accepted publickey for core from 10.0.0.1 port 55510 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:07:56.394386 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:07:56.399055 systemd[1]: Started session-18.scope.
Jul 15 11:07:56.399605 systemd-logind[1213]: New session 18 of user core.
Jul 15 11:07:56.519310 sshd[3465]: pam_unix(sshd:session): session closed for user core
Jul 15 11:07:56.521739 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:55510.service: Deactivated successfully.
Jul 15 11:07:56.522460 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 11:07:56.523004 systemd-logind[1213]: Session 18 logged out. Waiting for processes to exit.
Jul 15 11:07:56.523745 systemd-logind[1213]: Removed session 18.
Jul 15 11:08:01.523840 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:55526.service.
Jul 15 11:08:01.558624 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 55526 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:08:01.559789 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:08:01.562852 systemd-logind[1213]: New session 19 of user core.
Jul 15 11:08:01.563652 systemd[1]: Started session-19.scope.
Jul 15 11:08:01.665312 sshd[3480]: pam_unix(sshd:session): session closed for user core
Jul 15 11:08:01.667970 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:55526.service: Deactivated successfully.
Jul 15 11:08:01.668688 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 11:08:01.669182 systemd-logind[1213]: Session 19 logged out. Waiting for processes to exit.
Jul 15 11:08:01.669877 systemd-logind[1213]: Removed session 19.
Jul 15 11:08:06.670354 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:43314.service.
Jul 15 11:08:06.705342 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 43314 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:08:06.706766 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:08:06.710175 systemd-logind[1213]: New session 20 of user core.
Jul 15 11:08:06.710614 systemd[1]: Started session-20.scope.
Jul 15 11:08:06.815211 sshd[3495]: pam_unix(sshd:session): session closed for user core
Jul 15 11:08:06.817478 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:43314.service: Deactivated successfully.
Jul 15 11:08:06.818155 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 11:08:06.818685 systemd-logind[1213]: Session 20 logged out. Waiting for processes to exit.
Jul 15 11:08:06.819418 systemd-logind[1213]: Removed session 20.
Jul 15 11:08:11.820404 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:43324.service.
Jul 15 11:08:11.855610 sshd[3508]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:08:11.857244 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:08:11.860574 systemd-logind[1213]: New session 21 of user core.
Jul 15 11:08:11.861381 systemd[1]: Started session-21.scope.
Jul 15 11:08:11.967423 sshd[3508]: pam_unix(sshd:session): session closed for user core
Jul 15 11:08:11.971176 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:43336.service.
Jul 15 11:08:11.971723 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:43324.service: Deactivated successfully.
Jul 15 11:08:11.972383 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 11:08:11.972990 systemd-logind[1213]: Session 21 logged out. Waiting for processes to exit.
Jul 15 11:08:11.973890 systemd-logind[1213]: Removed session 21.
Jul 15 11:08:12.006406 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 43336 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA
Jul 15 11:08:12.007686 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:08:12.010950 systemd-logind[1213]: New session 22 of user core.
Jul 15 11:08:12.011858 systemd[1]: Started session-22.scope.
Jul 15 11:08:13.680964 env[1221]: time="2025-07-15T11:08:13.680915979Z" level=info msg="StopContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" with timeout 30 (s)"
Jul 15 11:08:13.681686 env[1221]: time="2025-07-15T11:08:13.681661023Z" level=info msg="Stop container \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" with signal terminated"
Jul 15 11:08:13.697030 systemd[1]: cri-containerd-f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a.scope: Deactivated successfully.
Jul 15 11:08:13.716033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a-rootfs.mount: Deactivated successfully.
Jul 15 11:08:13.722247 env[1221]: time="2025-07-15T11:08:13.722188585Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:08:13.726702 env[1221]: time="2025-07-15T11:08:13.726655852Z" level=info msg="shim disconnected" id=f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a
Jul 15 11:08:13.726702 env[1221]: time="2025-07-15T11:08:13.726700292Z" level=warning msg="cleaning up after shim disconnected" id=f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a namespace=k8s.io
Jul 15 11:08:13.726837 env[1221]: time="2025-07-15T11:08:13.726709372Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:13.728749 env[1221]: time="2025-07-15T11:08:13.728715344Z" level=info msg="StopContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" with timeout 2 (s)"
Jul 15 11:08:13.729085 env[1221]: time="2025-07-15T11:08:13.729061146Z" level=info msg="Stop container \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" with signal terminated"
Jul 15 11:08:13.734791 systemd-networkd[1049]: lxc_health: Link DOWN
Jul 15 11:08:13.734800 systemd-networkd[1049]: lxc_health: Lost carrier
Jul 15 11:08:13.738554 env[1221]: time="2025-07-15T11:08:13.738495842Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3569 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:13.740684 env[1221]: time="2025-07-15T11:08:13.740638455Z" level=info msg="StopContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" returns successfully"
Jul 15 11:08:13.742181 env[1221]: time="2025-07-15T11:08:13.741336899Z" level=info msg="StopPodSandbox for \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\""
Jul 15 11:08:13.742181 env[1221]: time="2025-07-15T11:08:13.741412300Z" level=info msg="Container to stop \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.743088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941-shm.mount: Deactivated successfully.
Jul 15 11:08:13.753290 systemd[1]: cri-containerd-532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941.scope: Deactivated successfully.
Jul 15 11:08:13.767869 systemd[1]: cri-containerd-fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830.scope: Deactivated successfully.
Jul 15 11:08:13.768184 systemd[1]: cri-containerd-fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830.scope: Consumed 6.386s CPU time.
Jul 15 11:08:13.776876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941-rootfs.mount: Deactivated successfully.
Jul 15 11:08:13.784491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830-rootfs.mount: Deactivated successfully.
Jul 15 11:08:13.785614 env[1221]: time="2025-07-15T11:08:13.785569843Z" level=info msg="shim disconnected" id=532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941
Jul 15 11:08:13.785815 env[1221]: time="2025-07-15T11:08:13.785794764Z" level=warning msg="cleaning up after shim disconnected" id=532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941 namespace=k8s.io
Jul 15 11:08:13.785890 env[1221]: time="2025-07-15T11:08:13.785876044Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:13.790028 env[1221]: time="2025-07-15T11:08:13.789985869Z" level=info msg="shim disconnected" id=fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830
Jul 15 11:08:13.790235 env[1221]: time="2025-07-15T11:08:13.790215830Z" level=warning msg="cleaning up after shim disconnected" id=fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830 namespace=k8s.io
Jul 15 11:08:13.790329 env[1221]: time="2025-07-15T11:08:13.790313591Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:13.793130 env[1221]: time="2025-07-15T11:08:13.793093127Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3622 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:13.793577 env[1221]: time="2025-07-15T11:08:13.793544210Z" level=info msg="TearDown network for sandbox \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\" successfully"
Jul 15 11:08:13.793695 env[1221]: time="2025-07-15T11:08:13.793675491Z" level=info msg="StopPodSandbox for \"532e3d8b0fa9afd57d04c5918bced854b0a506568971d980663708d751a08941\" returns successfully"
Jul 15 11:08:13.800589 env[1221]: time="2025-07-15T11:08:13.800554652Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3632 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:13.805737 env[1221]: time="2025-07-15T11:08:13.805699923Z" level=info msg="StopContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" returns successfully"
Jul 15 11:08:13.806244 env[1221]: time="2025-07-15T11:08:13.806211886Z" level=info msg="StopPodSandbox for \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\""
Jul 15 11:08:13.806395 env[1221]: time="2025-07-15T11:08:13.806370687Z" level=info msg="Container to stop \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.806662 env[1221]: time="2025-07-15T11:08:13.806637768Z" level=info msg="Container to stop \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.806757 env[1221]: time="2025-07-15T11:08:13.806738209Z" level=info msg="Container to stop \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.806822 env[1221]: time="2025-07-15T11:08:13.806805409Z" level=info msg="Container to stop \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.806884 env[1221]: time="2025-07-15T11:08:13.806867210Z" level=info msg="Container to stop \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:08:13.814605 systemd[1]: cri-containerd-84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47.scope: Deactivated successfully.
Jul 15 11:08:13.841065 env[1221]: time="2025-07-15T11:08:13.840652411Z" level=info msg="shim disconnected" id=84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47
Jul 15 11:08:13.841065 env[1221]: time="2025-07-15T11:08:13.840705931Z" level=warning msg="cleaning up after shim disconnected" id=84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47 namespace=k8s.io
Jul 15 11:08:13.841065 env[1221]: time="2025-07-15T11:08:13.840781212Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:13.850561 env[1221]: time="2025-07-15T11:08:13.847839134Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3666 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:13.850561 env[1221]: time="2025-07-15T11:08:13.848165656Z" level=info msg="TearDown network for sandbox \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" successfully"
Jul 15 11:08:13.850561 env[1221]: time="2025-07-15T11:08:13.848188576Z" level=info msg="StopPodSandbox for \"84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47\" returns successfully"
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872423 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-run\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872486 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-kernel\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872508 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-etc-cni-netd\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872539 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9vsh\" (UniqueName: \"kubernetes.io/projected/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-kube-api-access-r9vsh\") pod \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\" (UID: \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\") "
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872559 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-hubble-tls\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.872570 kubelet[1907]: I0715 11:08:13.872578 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2494d49-63f1-49e7-b31c-3e574bb849b9-clustermesh-secrets\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872619 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98j82\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-kube-api-access-98j82\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872635 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-xtables-lock\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872651 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-config-path\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872665 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cni-path\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872679 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-net\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873121 kubelet[1907]: I0715 11:08:13.872699 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-cilium-config-path\") pod \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\" (UID: \"c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc\") "
Jul 15 11:08:13.873316 kubelet[1907]: I0715 11:08:13.872715 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-lib-modules\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873316 kubelet[1907]: I0715 11:08:13.872730 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-bpf-maps\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873316 kubelet[1907]: I0715 11:08:13.872748 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-hostproc\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.873316 kubelet[1907]: I0715 11:08:13.872765 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-cgroup\") pod \"a2494d49-63f1-49e7-b31c-3e574bb849b9\" (UID: \"a2494d49-63f1-49e7-b31c-3e574bb849b9\") "
Jul 15 11:08:13.874445 kubelet[1907]: I0715 11:08:13.874036 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.874445 kubelet[1907]: I0715 11:08:13.874092 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.874445 kubelet[1907]: I0715 11:08:13.874109 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.874445 kubelet[1907]: I0715 11:08:13.874323 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.877166 kubelet[1907]: I0715 11:08:13.877122 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 15 11:08:13.877248 kubelet[1907]: I0715 11:08:13.877181 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.877248 kubelet[1907]: I0715 11:08:13.877199 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.878512 kubelet[1907]: I0715 11:08:13.878475 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2494d49-63f1-49e7-b31c-3e574bb849b9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 15 11:08:13.879066 kubelet[1907]: I0715 11:08:13.879039 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-kube-api-access-r9vsh" (OuterVolumeSpecName: "kube-api-access-r9vsh") pod "c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc" (UID: "c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc"). InnerVolumeSpecName "kube-api-access-r9vsh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 11:08:13.879179 kubelet[1907]: I0715 11:08:13.879149 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.879179 kubelet[1907]: I0715 11:08:13.879108 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc" (UID: "c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 15 11:08:13.879179 kubelet[1907]: I0715 11:08:13.879135 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.879273 kubelet[1907]: I0715 11:08:13.879194 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.879921 kubelet[1907]: I0715 11:08:13.879888 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:08:13.880913 kubelet[1907]: I0715 11:08:13.880886 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 11:08:13.881891 kubelet[1907]: I0715 11:08:13.881864 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-kube-api-access-98j82" (OuterVolumeSpecName: "kube-api-access-98j82") pod "a2494d49-63f1-49e7-b31c-3e574bb849b9" (UID: "a2494d49-63f1-49e7-b31c-3e574bb849b9"). InnerVolumeSpecName "kube-api-access-98j82". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 11:08:13.973357 kubelet[1907]: I0715 11:08:13.973309 1907 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 15 11:08:13.973357 kubelet[1907]: I0715 11:08:13.973346 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:08:13.973357 kubelet[1907]: I0715 11:08:13.973359 1907 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98j82\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-kube-api-access-98j82\") on node \"localhost\" DevicePath \"\""
Jul 15 11:08:13.973357 kubelet[1907]: I0715 11:08:13.973367 1907 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cni-path\") on 
node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973378 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973386 1907 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973394 1907 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973402 1907 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973409 1907 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973416 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973423 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973657 kubelet[1907]: I0715 11:08:13.973433 1907 reconciler_common.go:299] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973822 kubelet[1907]: I0715 11:08:13.973440 1907 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2494d49-63f1-49e7-b31c-3e574bb849b9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973822 kubelet[1907]: I0715 11:08:13.973447 1907 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2494d49-63f1-49e7-b31c-3e574bb849b9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973822 kubelet[1907]: I0715 11:08:13.973455 1907 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2494d49-63f1-49e7-b31c-3e574bb849b9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:13.973822 kubelet[1907]: I0715 11:08:13.973462 1907 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9vsh\" (UniqueName: \"kubernetes.io/projected/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc-kube-api-access-r9vsh\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:14.544896 kubelet[1907]: I0715 11:08:14.544862 1907 scope.go:117] "RemoveContainer" containerID="f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a" Jul 15 11:08:14.547344 env[1221]: time="2025-07-15T11:08:14.547307534Z" level=info msg="RemoveContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\"" Jul 15 11:08:14.549019 systemd[1]: Removed slice kubepods-besteffort-podc80ef793_f5c1_4803_a7e3_9c8fe3e66ebc.slice. 
Jul 15 11:08:14.552390 env[1221]: time="2025-07-15T11:08:14.551862001Z" level=info msg="RemoveContainer for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" returns successfully" Jul 15 11:08:14.552390 env[1221]: time="2025-07-15T11:08:14.552173403Z" level=error msg="ContainerStatus for \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\": not found" Jul 15 11:08:14.552500 kubelet[1907]: I0715 11:08:14.552036 1907 scope.go:117] "RemoveContainer" containerID="f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a" Jul 15 11:08:14.552500 kubelet[1907]: E0715 11:08:14.552321 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\": not found" containerID="f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a" Jul 15 11:08:14.553306 kubelet[1907]: I0715 11:08:14.553203 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a"} err="failed to get container status \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f42111413288296bf98301939a88743c5e08d4622365b5b4358419a90fb8d54a\": not found" Jul 15 11:08:14.553819 kubelet[1907]: I0715 11:08:14.553797 1907 scope.go:117] "RemoveContainer" containerID="fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830" Jul 15 11:08:14.554896 env[1221]: time="2025-07-15T11:08:14.554858178Z" level=info msg="RemoveContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\"" Jul 15 11:08:14.557553 systemd[1]: Removed slice 
kubepods-burstable-poda2494d49_63f1_49e7_b31c_3e574bb849b9.slice. Jul 15 11:08:14.557709 systemd[1]: kubepods-burstable-poda2494d49_63f1_49e7_b31c_3e574bb849b9.slice: Consumed 6.623s CPU time. Jul 15 11:08:14.558464 env[1221]: time="2025-07-15T11:08:14.558426079Z" level=info msg="RemoveContainer for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" returns successfully" Jul 15 11:08:14.558688 kubelet[1907]: I0715 11:08:14.558662 1907 scope.go:117] "RemoveContainer" containerID="b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775" Jul 15 11:08:14.561124 env[1221]: time="2025-07-15T11:08:14.560956014Z" level=info msg="RemoveContainer for \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\"" Jul 15 11:08:14.564731 env[1221]: time="2025-07-15T11:08:14.564655155Z" level=info msg="RemoveContainer for \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\" returns successfully" Jul 15 11:08:14.564849 kubelet[1907]: I0715 11:08:14.564827 1907 scope.go:117] "RemoveContainer" containerID="3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce" Jul 15 11:08:14.565862 env[1221]: time="2025-07-15T11:08:14.565822322Z" level=info msg="RemoveContainer for \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\"" Jul 15 11:08:14.568803 env[1221]: time="2025-07-15T11:08:14.568772259Z" level=info msg="RemoveContainer for \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\" returns successfully" Jul 15 11:08:14.568942 kubelet[1907]: I0715 11:08:14.568918 1907 scope.go:117] "RemoveContainer" containerID="4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f" Jul 15 11:08:14.571224 env[1221]: time="2025-07-15T11:08:14.570038626Z" level=info msg="RemoveContainer for \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\"" Jul 15 11:08:14.573366 env[1221]: time="2025-07-15T11:08:14.573316925Z" level=info msg="RemoveContainer for 
\"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\" returns successfully" Jul 15 11:08:14.573537 kubelet[1907]: I0715 11:08:14.573486 1907 scope.go:117] "RemoveContainer" containerID="972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8" Jul 15 11:08:14.574352 env[1221]: time="2025-07-15T11:08:14.574312971Z" level=info msg="RemoveContainer for \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\"" Jul 15 11:08:14.576745 env[1221]: time="2025-07-15T11:08:14.576709105Z" level=info msg="RemoveContainer for \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\" returns successfully" Jul 15 11:08:14.576862 kubelet[1907]: I0715 11:08:14.576832 1907 scope.go:117] "RemoveContainer" containerID="fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830" Jul 15 11:08:14.577056 env[1221]: time="2025-07-15T11:08:14.576992387Z" level=error msg="ContainerStatus for \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\": not found" Jul 15 11:08:14.577152 kubelet[1907]: E0715 11:08:14.577129 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\": not found" containerID="fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830" Jul 15 11:08:14.577182 kubelet[1907]: I0715 11:08:14.577154 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830"} err="failed to get container status \"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"fce5135af73ca637bb9840e208b1f1adb5fcaa1f4e399a6a7b77cbd5f7439830\": not found" Jul 15 11:08:14.577182 kubelet[1907]: I0715 11:08:14.577172 1907 scope.go:117] "RemoveContainer" containerID="b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775" Jul 15 11:08:14.577398 env[1221]: time="2025-07-15T11:08:14.577343949Z" level=error msg="ContainerStatus for \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\": not found" Jul 15 11:08:14.577497 kubelet[1907]: E0715 11:08:14.577482 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\": not found" containerID="b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775" Jul 15 11:08:14.577577 kubelet[1907]: I0715 11:08:14.577500 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775"} err="failed to get container status \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\": rpc error: code = NotFound desc = an error occurred when try to find container \"b41f63fe4eee8169ac58638f976fa4f09fe8fb6811750ea6b14a0f828ad98775\": not found" Jul 15 11:08:14.577577 kubelet[1907]: I0715 11:08:14.577512 1907 scope.go:117] "RemoveContainer" containerID="3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce" Jul 15 11:08:14.577699 env[1221]: time="2025-07-15T11:08:14.577638310Z" level=error msg="ContainerStatus for \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\": not 
found" Jul 15 11:08:14.577745 kubelet[1907]: E0715 11:08:14.577732 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\": not found" containerID="3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce" Jul 15 11:08:14.577773 kubelet[1907]: I0715 11:08:14.577746 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce"} err="failed to get container status \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"3647e5a6ba23504922d02f14b2436f63e43ae76e52ba0203dd7e0584886fb9ce\": not found" Jul 15 11:08:14.577773 kubelet[1907]: I0715 11:08:14.577757 1907 scope.go:117] "RemoveContainer" containerID="4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f" Jul 15 11:08:14.577919 env[1221]: time="2025-07-15T11:08:14.577872472Z" level=error msg="ContainerStatus for \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\": not found" Jul 15 11:08:14.578004 kubelet[1907]: E0715 11:08:14.577987 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\": not found" containerID="4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f" Jul 15 11:08:14.578033 kubelet[1907]: I0715 11:08:14.578009 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f"} 
err="failed to get container status \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4859ebacd1db750195de9d8b4640db289945d0f8dd21a6a2c2ddea54b212fc3f\": not found" Jul 15 11:08:14.578033 kubelet[1907]: I0715 11:08:14.578022 1907 scope.go:117] "RemoveContainer" containerID="972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8" Jul 15 11:08:14.578182 env[1221]: time="2025-07-15T11:08:14.578143193Z" level=error msg="ContainerStatus for \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\": not found" Jul 15 11:08:14.578249 kubelet[1907]: E0715 11:08:14.578235 1907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\": not found" containerID="972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8" Jul 15 11:08:14.578282 kubelet[1907]: I0715 11:08:14.578252 1907 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8"} err="failed to get container status \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"972326be10da9792809b9d5c0060bf9b86e328b7ed666863d8fb483a1bc52bb8\": not found" Jul 15 11:08:14.693912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47-rootfs.mount: Deactivated successfully. 
Jul 15 11:08:14.694011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84bb1d09a43dc4ed13353a088cfc20d988883b05dc3d2f3583dad873330cbe47-shm.mount: Deactivated successfully. Jul 15 11:08:14.694069 systemd[1]: var-lib-kubelet-pods-c80ef793\x2df5c1\x2d4803\x2da7e3\x2d9c8fe3e66ebc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9vsh.mount: Deactivated successfully. Jul 15 11:08:14.694124 systemd[1]: var-lib-kubelet-pods-a2494d49\x2d63f1\x2d49e7\x2db31c\x2d3e574bb849b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98j82.mount: Deactivated successfully. Jul 15 11:08:14.694170 systemd[1]: var-lib-kubelet-pods-a2494d49\x2d63f1\x2d49e7\x2db31c\x2d3e574bb849b9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:08:14.694227 systemd[1]: var-lib-kubelet-pods-a2494d49\x2d63f1\x2d49e7\x2db31c\x2d3e574bb849b9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:08:15.336142 kubelet[1907]: I0715 11:08:15.336094 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2494d49-63f1-49e7-b31c-3e574bb849b9" path="/var/lib/kubelet/pods/a2494d49-63f1-49e7-b31c-3e574bb849b9/volumes" Jul 15 11:08:15.336720 kubelet[1907]: I0715 11:08:15.336702 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc" path="/var/lib/kubelet/pods/c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc/volumes" Jul 15 11:08:15.633711 sshd[3520]: pam_unix(sshd:session): session closed for user core Jul 15 11:08:15.636634 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:43336.service: Deactivated successfully. Jul 15 11:08:15.637310 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:08:15.637825 systemd-logind[1213]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:08:15.638983 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:57998.service. Jul 15 11:08:15.639933 systemd-logind[1213]: Removed session 22. 
Jul 15 11:08:15.677061 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 57998 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:08:15.678411 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:08:15.684348 systemd[1]: Started session-23.scope. Jul 15 11:08:15.685041 systemd-logind[1213]: New session 23 of user core. Jul 15 11:08:16.551562 sshd[3684]: pam_unix(sshd:session): session closed for user core Jul 15 11:08:16.555585 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:57998.service: Deactivated successfully. Jul 15 11:08:16.556239 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 11:08:16.557398 systemd-logind[1213]: Session 23 logged out. Waiting for processes to exit. Jul 15 11:08:16.558771 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:58002.service. Jul 15 11:08:16.562342 systemd-logind[1213]: Removed session 23. Jul 15 11:08:16.575558 kubelet[1907]: I0715 11:08:16.575503 1907 memory_manager.go:355] "RemoveStaleState removing state" podUID="a2494d49-63f1-49e7-b31c-3e574bb849b9" containerName="cilium-agent" Jul 15 11:08:16.575883 kubelet[1907]: I0715 11:08:16.575868 1907 memory_manager.go:355] "RemoveStaleState removing state" podUID="c80ef793-f5c1-4803-a7e3-9c8fe3e66ebc" containerName="cilium-operator" Jul 15 11:08:16.584768 systemd[1]: Created slice kubepods-burstable-pod01e4e7ee_23a0_42b6_b252_bd6b03abcc1d.slice. 
Jul 15 11:08:16.588018 kubelet[1907]: I0715 11:08:16.587993 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-clustermesh-secrets\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588131 kubelet[1907]: I0715 11:08:16.588117 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-config-path\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588222 kubelet[1907]: I0715 11:08:16.588209 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-ipsec-secrets\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588318 kubelet[1907]: I0715 11:08:16.588292 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-cgroup\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588389 kubelet[1907]: I0715 11:08:16.588376 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cni-path\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588465 kubelet[1907]: I0715 11:08:16.588452 1907 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hostproc\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588561 kubelet[1907]: I0715 11:08:16.588545 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-kernel\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588659 kubelet[1907]: I0715 11:08:16.588646 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hubble-tls\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588732 kubelet[1907]: I0715 11:08:16.588720 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-run\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588795 kubelet[1907]: I0715 11:08:16.588783 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-lib-modules\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588867 kubelet[1907]: I0715 11:08:16.588855 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-xtables-lock\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.588941 kubelet[1907]: I0715 11:08:16.588929 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-bpf-maps\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.589012 kubelet[1907]: I0715 11:08:16.588999 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-net\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.589080 kubelet[1907]: I0715 11:08:16.589067 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8cz\" (UniqueName: \"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-kube-api-access-rl8cz\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.589149 kubelet[1907]: I0715 11:08:16.589137 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-etc-cni-netd\") pod \"cilium-7hwhw\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " pod="kube-system/cilium-7hwhw" Jul 15 11:08:16.595333 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:08:16.596838 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:08:16.602146 systemd-logind[1213]: New session 
24 of user core. Jul 15 11:08:16.602762 systemd[1]: Started session-24.scope. Jul 15 11:08:16.731839 sshd[3697]: pam_unix(sshd:session): session closed for user core Jul 15 11:08:16.737381 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:58012.service. Jul 15 11:08:16.737883 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:58002.service: Deactivated successfully. Jul 15 11:08:16.742119 kubelet[1907]: E0715 11:08:16.742089 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:08:16.743281 env[1221]: time="2025-07-15T11:08:16.742940724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hwhw,Uid:01e4e7ee-23a0-42b6-b252-bd6b03abcc1d,Namespace:kube-system,Attempt:0,}" Jul 15 11:08:16.746741 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 11:08:16.747623 systemd-logind[1213]: Session 24 logged out. Waiting for processes to exit. Jul 15 11:08:16.748775 systemd-logind[1213]: Removed session 24. Jul 15 11:08:16.758514 env[1221]: time="2025-07-15T11:08:16.758380289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:08:16.758514 env[1221]: time="2025-07-15T11:08:16.758451409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:08:16.758514 env[1221]: time="2025-07-15T11:08:16.758462289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:08:16.758930 env[1221]: time="2025-07-15T11:08:16.758873772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9 pid=3723 runtime=io.containerd.runc.v2 Jul 15 11:08:16.769902 systemd[1]: Started cri-containerd-aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9.scope. Jul 15 11:08:16.776578 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 58012 ssh2: RSA SHA256:j1wDR6gweCSngZuaE8kL1fszhDF+Tuwb03sE4/bYQBA Jul 15 11:08:16.778054 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:08:16.782899 systemd-logind[1213]: New session 25 of user core. Jul 15 11:08:16.783739 systemd[1]: Started session-25.scope. Jul 15 11:08:16.807245 env[1221]: time="2025-07-15T11:08:16.807132957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hwhw,Uid:01e4e7ee-23a0-42b6-b252-bd6b03abcc1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\"" Jul 15 11:08:16.807842 kubelet[1907]: E0715 11:08:16.807813 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:08:16.814999 env[1221]: time="2025-07-15T11:08:16.814960120Z" level=info msg="CreateContainer within sandbox \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:08:16.825846 env[1221]: time="2025-07-15T11:08:16.825803779Z" level=info msg="CreateContainer within sandbox \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\"" 
Jul 15 11:08:16.826659 env[1221]: time="2025-07-15T11:08:16.826628504Z" level=info msg="StartContainer for \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\"" Jul 15 11:08:16.840463 systemd[1]: Started cri-containerd-da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9.scope. Jul 15 11:08:16.865982 systemd[1]: cri-containerd-da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9.scope: Deactivated successfully. Jul 15 11:08:16.887356 env[1221]: time="2025-07-15T11:08:16.882884453Z" level=info msg="shim disconnected" id=da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9 Jul 15 11:08:16.887356 env[1221]: time="2025-07-15T11:08:16.882944973Z" level=warning msg="cleaning up after shim disconnected" id=da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9 namespace=k8s.io Jul 15 11:08:16.887356 env[1221]: time="2025-07-15T11:08:16.882955773Z" level=info msg="cleaning up dead shim" Jul 15 11:08:16.892567 env[1221]: time="2025-07-15T11:08:16.892135504Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3787 runtime=io.containerd.runc.v2\ntime=\"2025-07-15T11:08:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 15 11:08:16.892674 env[1221]: time="2025-07-15T11:08:16.892470385Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Jul 15 11:08:16.892898 env[1221]: time="2025-07-15T11:08:16.892833787Z" level=error msg="Failed to pipe stderr of container \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\"" error="reading from a closed fifo" Jul 15 11:08:16.893482 env[1221]: time="2025-07-15T11:08:16.893434431Z" level=error msg="Failed to pipe stdout of container 
\"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\"" error="reading from a closed fifo" Jul 15 11:08:16.895420 env[1221]: time="2025-07-15T11:08:16.895344321Z" level=error msg="StartContainer for \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 15 11:08:16.895737 kubelet[1907]: E0715 11:08:16.895633 1907 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9" Jul 15 11:08:16.897099 kubelet[1907]: E0715 11:08:16.896233 1907 kuberuntime_manager.go:1341] "Unhandled Error" err=< Jul 15 11:08:16.897099 kubelet[1907]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 15 11:08:16.897099 kubelet[1907]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 15 11:08:16.897099 kubelet[1907]: rm /hostbin/cilium-mount Jul 15 11:08:16.897368 kubelet[1907]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rl8cz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7hwhw_kube-system(01e4e7ee-23a0-42b6-b252-bd6b03abcc1d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 15 11:08:16.897368 kubelet[1907]: > logger="UnhandledError" Jul 15 11:08:16.897500 kubelet[1907]: E0715 11:08:16.897392 1907 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7hwhw" podUID="01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" Jul 15 11:08:17.373463 kubelet[1907]: E0715 11:08:17.373424 1907 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:08:17.562993 env[1221]: time="2025-07-15T11:08:17.562947426Z" level=info msg="StopPodSandbox for \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\"" Jul 15 11:08:17.563137 env[1221]: time="2025-07-15T11:08:17.563002267Z" level=info msg="Container to stop \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:08:17.569091 systemd[1]: cri-containerd-aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9.scope: Deactivated successfully. 
Jul 15 11:08:17.592186 env[1221]: time="2025-07-15T11:08:17.592144382Z" level=info msg="shim disconnected" id=aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9 Jul 15 11:08:17.592341 env[1221]: time="2025-07-15T11:08:17.592187823Z" level=warning msg="cleaning up after shim disconnected" id=aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9 namespace=k8s.io Jul 15 11:08:17.592341 env[1221]: time="2025-07-15T11:08:17.592198543Z" level=info msg="cleaning up dead shim" Jul 15 11:08:17.598506 env[1221]: time="2025-07-15T11:08:17.598472056Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\n" Jul 15 11:08:17.598836 env[1221]: time="2025-07-15T11:08:17.598799458Z" level=info msg="TearDown network for sandbox \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\" successfully" Jul 15 11:08:17.598836 env[1221]: time="2025-07-15T11:08:17.598826978Z" level=info msg="StopPodSandbox for \"aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9\" returns successfully" Jul 15 11:08:17.694398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa1c24d86c18989b90022847ca53a06c85b76422a1a17f6cd3751aea6aedc5f9-shm.mount: Deactivated successfully. 
Jul 15 11:08:17.697797 kubelet[1907]: I0715 11:08:17.697768 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-clustermesh-secrets\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697801 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-cgroup\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697821 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-config-path\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697837 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-lib-modules\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697852 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-net\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697880 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8cz\" (UniqueName: 
\"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-kube-api-access-rl8cz\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697895 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cni-path\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697910 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-xtables-lock\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697928 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-run\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697944 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-kernel\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697959 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-bpf-maps\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697977 1907 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-ipsec-secrets\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.697992 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hostproc\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.698007 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hubble-tls\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.698020 1907 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-etc-cni-netd\") pod \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\" (UID: \"01e4e7ee-23a0-42b6-b252-bd6b03abcc1d\") " Jul 15 11:08:17.698124 kubelet[1907]: I0715 11:08:17.698079 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698495 kubelet[1907]: I0715 11:08:17.698331 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698495 kubelet[1907]: I0715 11:08:17.698366 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698581 kubelet[1907]: I0715 11:08:17.698563 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698581 kubelet[1907]: I0715 11:08:17.698569 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698631 kubelet[1907]: I0715 11:08:17.698590 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698631 kubelet[1907]: I0715 11:08:17.698596 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698631 kubelet[1907]: I0715 11:08:17.698610 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.698631 kubelet[1907]: I0715 11:08:17.698629 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hostproc" (OuterVolumeSpecName: "hostproc") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.700493 kubelet[1907]: I0715 11:08:17.698839 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cni-path" (OuterVolumeSpecName: "cni-path") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:08:17.701966 systemd[1]: var-lib-kubelet-pods-01e4e7ee\x2d23a0\x2d42b6\x2db252\x2dbd6b03abcc1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drl8cz.mount: Deactivated successfully. Jul 15 11:08:17.702048 systemd[1]: var-lib-kubelet-pods-01e4e7ee\x2d23a0\x2d42b6\x2db252\x2dbd6b03abcc1d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 15 11:08:17.702097 systemd[1]: var-lib-kubelet-pods-01e4e7ee\x2d23a0\x2d42b6\x2db252\x2dbd6b03abcc1d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:08:17.703641 kubelet[1907]: I0715 11:08:17.703616 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:08:17.703874 kubelet[1907]: I0715 11:08:17.703855 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:08:17.704164 systemd[1]: var-lib-kubelet-pods-01e4e7ee\x2d23a0\x2d42b6\x2db252\x2dbd6b03abcc1d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:08:17.704402 kubelet[1907]: I0715 11:08:17.704379 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-kube-api-access-rl8cz" (OuterVolumeSpecName: "kube-api-access-rl8cz") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "kube-api-access-rl8cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:08:17.704577 kubelet[1907]: I0715 11:08:17.704552 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:08:17.704689 kubelet[1907]: I0715 11:08:17.704670 1907 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" (UID: "01e4e7ee-23a0-42b6-b252-bd6b03abcc1d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:08:17.798894 kubelet[1907]: I0715 11:08:17.798865 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799028 kubelet[1907]: I0715 11:08:17.799015 1907 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799114 kubelet[1907]: I0715 11:08:17.799103 1907 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799178 kubelet[1907]: I0715 11:08:17.799169 1907 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799244 kubelet[1907]: I0715 11:08:17.799233 1907 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799342 kubelet[1907]: I0715 11:08:17.799332 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799421 kubelet[1907]: I0715 11:08:17.799411 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799491 kubelet[1907]: I0715 
11:08:17.799481 1907 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799581 kubelet[1907]: I0715 11:08:17.799571 1907 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799664 kubelet[1907]: I0715 11:08:17.799652 1907 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rl8cz\" (UniqueName: \"kubernetes.io/projected/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-kube-api-access-rl8cz\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799737 kubelet[1907]: I0715 11:08:17.799727 1907 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799801 kubelet[1907]: I0715 11:08:17.799784 1907 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799864 kubelet[1907]: I0715 11:08:17.799854 1907 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799931 kubelet[1907]: I0715 11:08:17.799921 1907 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:17.799995 kubelet[1907]: I0715 11:08:17.799976 1907 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:08:18.565381 kubelet[1907]: I0715 11:08:18.565347 1907 scope.go:117] "RemoveContainer" containerID="da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9" Jul 15 11:08:18.569100 systemd[1]: Removed slice kubepods-burstable-pod01e4e7ee_23a0_42b6_b252_bd6b03abcc1d.slice. Jul 15 11:08:18.570411 env[1221]: time="2025-07-15T11:08:18.570375253Z" level=info msg="RemoveContainer for \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\"" Jul 15 11:08:18.577153 env[1221]: time="2025-07-15T11:08:18.577118408Z" level=info msg="RemoveContainer for \"da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9\" returns successfully" Jul 15 11:08:18.608904 kubelet[1907]: I0715 11:08:18.608856 1907 memory_manager.go:355] "RemoveStaleState removing state" podUID="01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" containerName="mount-cgroup" Jul 15 11:08:18.611428 kubelet[1907]: I0715 11:08:18.611386 1907 status_manager.go:890] "Failed to get status for pod" podUID="fc730126-4cce-4071-933f-5658fea95d57" pod="kube-system/cilium-rj8cx" err="pods \"cilium-rj8cx\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 15 11:08:18.614131 kubelet[1907]: W0715 11:08:18.614100 1907 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 15 11:08:18.614228 kubelet[1907]: E0715 11:08:18.614147 1907 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" 
is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 15 11:08:18.614228 kubelet[1907]: W0715 11:08:18.614101 1907 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 15 11:08:18.614228 kubelet[1907]: E0715 11:08:18.614179 1907 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 15 11:08:18.614228 kubelet[1907]: W0715 11:08:18.614100 1907 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 15 11:08:18.614228 kubelet[1907]: E0715 11:08:18.614195 1907 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 15 11:08:18.615010 systemd[1]: Created slice kubepods-burstable-podfc730126_4cce_4071_933f_5658fea95d57.slice. 
Jul 15 11:08:18.705945 kubelet[1907]: I0715 11:08:18.705906 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-host-proc-sys-kernel\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.705945 kubelet[1907]: I0715 11:08:18.705950 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-cilium-run\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.705970 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-bpf-maps\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.705987 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-clustermesh-secrets\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706006 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-cilium-cgroup\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706022 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-xtables-lock\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706037 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-lib-modules\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706053 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc730126-4cce-4071-933f-5658fea95d57-hubble-tls\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706073 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-hostproc\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706109 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-etc-cni-netd\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706150 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-cilium-ipsec-secrets\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706193 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-cni-path\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706216 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc730126-4cce-4071-933f-5658fea95d57-host-proc-sys-net\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706235 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndb9\" (UniqueName: \"kubernetes.io/projected/fc730126-4cce-4071-933f-5658fea95d57-kube-api-access-6ndb9\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:18.706301 kubelet[1907]: I0715 11:08:18.706260 1907 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc730126-4cce-4071-933f-5658fea95d57-cilium-config-path\") pod \"cilium-rj8cx\" (UID: \"fc730126-4cce-4071-933f-5658fea95d57\") " pod="kube-system/cilium-rj8cx"
Jul 15 11:08:19.332791 kubelet[1907]: I0715 11:08:19.332713 1907 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:08:19Z","lastTransitionTime":"2025-07-15T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 11:08:19.335084 kubelet[1907]: I0715 11:08:19.335054 1907 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e4e7ee-23a0-42b6-b252-bd6b03abcc1d" path="/var/lib/kubelet/pods/01e4e7ee-23a0-42b6-b252-bd6b03abcc1d/volumes"
Jul 15 11:08:19.809416 kubelet[1907]: E0715 11:08:19.809376 1907 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jul 15 11:08:19.809416 kubelet[1907]: E0715 11:08:19.809396 1907 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jul 15 11:08:19.809773 kubelet[1907]: E0715 11:08:19.809461 1907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-clustermesh-secrets podName:fc730126-4cce-4071-933f-5658fea95d57 nodeName:}" failed. No retries permitted until 2025-07-15 11:08:20.309440632 +0000 UTC m=+83.070392628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-clustermesh-secrets") pod "cilium-rj8cx" (UID: "fc730126-4cce-4071-933f-5658fea95d57") : failed to sync secret cache: timed out waiting for the condition
Jul 15 11:08:19.809773 kubelet[1907]: E0715 11:08:19.809479 1907 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-cilium-ipsec-secrets podName:fc730126-4cce-4071-933f-5658fea95d57 nodeName:}" failed. No retries permitted until 2025-07-15 11:08:20.309472233 +0000 UTC m=+83.070424229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/fc730126-4cce-4071-933f-5658fea95d57-cilium-ipsec-secrets") pod "cilium-rj8cx" (UID: "fc730126-4cce-4071-933f-5658fea95d57") : failed to sync secret cache: timed out waiting for the condition
Jul 15 11:08:19.988975 kubelet[1907]: W0715 11:08:19.988908 1907 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e4e7ee_23a0_42b6_b252_bd6b03abcc1d.slice/cri-containerd-da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9.scope WatchSource:0}: container "da4b2f2cfef6f943c6b8266437ae50e8e5b38ebdea3d2873f5e8ca45081da8a9" in namespace "k8s.io": not found
Jul 15 11:08:20.417265 kubelet[1907]: E0715 11:08:20.417214 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:20.418029 env[1221]: time="2025-07-15T11:08:20.417674140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj8cx,Uid:fc730126-4cce-4071-933f-5658fea95d57,Namespace:kube-system,Attempt:0,}"
Jul 15 11:08:20.431607 env[1221]: time="2025-07-15T11:08:20.431545849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:08:20.431607 env[1221]: time="2025-07-15T11:08:20.431584169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:08:20.431607 env[1221]: time="2025-07-15T11:08:20.431594649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:08:20.431743 env[1221]: time="2025-07-15T11:08:20.431705530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1 pid=3846 runtime=io.containerd.runc.v2
Jul 15 11:08:20.441217 systemd[1]: Started cri-containerd-9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1.scope.
Jul 15 11:08:20.467114 env[1221]: time="2025-07-15T11:08:20.467077184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rj8cx,Uid:fc730126-4cce-4071-933f-5658fea95d57,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\""
Jul 15 11:08:20.467928 kubelet[1907]: E0715 11:08:20.467896 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:20.471997 env[1221]: time="2025-07-15T11:08:20.471964128Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 11:08:20.481565 env[1221]: time="2025-07-15T11:08:20.481513616Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73\""
Jul 15 11:08:20.482712 env[1221]: time="2025-07-15T11:08:20.482677901Z" level=info msg="StartContainer for \"54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73\""
Jul 15 11:08:20.495666 systemd[1]: Started cri-containerd-54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73.scope.
Jul 15 11:08:20.527739 env[1221]: time="2025-07-15T11:08:20.527684043Z" level=info msg="StartContainer for \"54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73\" returns successfully"
Jul 15 11:08:20.533361 systemd[1]: cri-containerd-54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73.scope: Deactivated successfully.
Jul 15 11:08:20.554983 env[1221]: time="2025-07-15T11:08:20.554940178Z" level=info msg="shim disconnected" id=54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73
Jul 15 11:08:20.555192 env[1221]: time="2025-07-15T11:08:20.555171219Z" level=warning msg="cleaning up after shim disconnected" id=54d6516a1596aa61f27a5882351b3c6e01b4469e45aa7207ebd8846438280d73 namespace=k8s.io
Jul 15 11:08:20.555274 env[1221]: time="2025-07-15T11:08:20.555259940Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:20.561414 env[1221]: time="2025-07-15T11:08:20.561383290Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3929 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:20.570415 kubelet[1907]: E0715 11:08:20.570231 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:20.573307 env[1221]: time="2025-07-15T11:08:20.573276189Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 11:08:20.585973 env[1221]: time="2025-07-15T11:08:20.585926371Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74\""
Jul 15 11:08:20.586424 env[1221]: time="2025-07-15T11:08:20.586397013Z" level=info msg="StartContainer for \"36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74\""
Jul 15 11:08:20.599667 systemd[1]: Started cri-containerd-36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74.scope.
Jul 15 11:08:20.633306 env[1221]: time="2025-07-15T11:08:20.632787042Z" level=info msg="StartContainer for \"36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74\" returns successfully"
Jul 15 11:08:20.639160 systemd[1]: cri-containerd-36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74.scope: Deactivated successfully.
Jul 15 11:08:20.667030 env[1221]: time="2025-07-15T11:08:20.666978531Z" level=info msg="shim disconnected" id=36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74
Jul 15 11:08:20.667030 env[1221]: time="2025-07-15T11:08:20.667022011Z" level=warning msg="cleaning up after shim disconnected" id=36cdc2e2b85358c0273aee3358f70f4243db9012f03c920bdc1eca7e6ec20f74 namespace=k8s.io
Jul 15 11:08:20.667030 env[1221]: time="2025-07-15T11:08:20.667031691Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:20.673648 env[1221]: time="2025-07-15T11:08:20.673558324Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3991 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:21.322116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012724606.mount: Deactivated successfully.
Jul 15 11:08:21.573795 kubelet[1907]: E0715 11:08:21.573549 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:21.581914 env[1221]: time="2025-07-15T11:08:21.581855613Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 11:08:21.596678 env[1221]: time="2025-07-15T11:08:21.596637164Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c\""
Jul 15 11:08:21.597180 env[1221]: time="2025-07-15T11:08:21.597158446Z" level=info msg="StartContainer for \"b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c\""
Jul 15 11:08:21.612964 systemd[1]: Started cri-containerd-b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c.scope.
Jul 15 11:08:21.641440 systemd[1]: cri-containerd-b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c.scope: Deactivated successfully.
Jul 15 11:08:21.642116 env[1221]: time="2025-07-15T11:08:21.642065182Z" level=info msg="StartContainer for \"b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c\" returns successfully"
Jul 15 11:08:21.660903 env[1221]: time="2025-07-15T11:08:21.660861833Z" level=info msg="shim disconnected" id=b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c
Jul 15 11:08:21.660903 env[1221]: time="2025-07-15T11:08:21.660903513Z" level=warning msg="cleaning up after shim disconnected" id=b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c namespace=k8s.io
Jul 15 11:08:21.661071 env[1221]: time="2025-07-15T11:08:21.660913313Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:21.667235 env[1221]: time="2025-07-15T11:08:21.667200983Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4049 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:22.322164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b678fda77549dc294ea5a01699991631ebd84fbc4e1eaa0d7214e95e0c57603c-rootfs.mount: Deactivated successfully.
Jul 15 11:08:22.375022 kubelet[1907]: E0715 11:08:22.374987 1907 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 11:08:22.576558 kubelet[1907]: E0715 11:08:22.576426 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:22.580279 env[1221]: time="2025-07-15T11:08:22.580225301Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 11:08:22.592590 env[1221]: time="2025-07-15T11:08:22.589839666Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385\""
Jul 15 11:08:22.592590 env[1221]: time="2025-07-15T11:08:22.590510149Z" level=info msg="StartContainer for \"b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385\""
Jul 15 11:08:22.606893 systemd[1]: Started cri-containerd-b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385.scope.
Jul 15 11:08:22.638307 systemd[1]: cri-containerd-b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385.scope: Deactivated successfully.
Jul 15 11:08:22.639023 env[1221]: time="2025-07-15T11:08:22.638965536Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc730126_4cce_4071_933f_5658fea95d57.slice/cri-containerd-b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385.scope/memory.events\": no such file or directory"
Jul 15 11:08:22.640555 env[1221]: time="2025-07-15T11:08:22.640497063Z" level=info msg="StartContainer for \"b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385\" returns successfully"
Jul 15 11:08:22.658735 env[1221]: time="2025-07-15T11:08:22.658693588Z" level=info msg="shim disconnected" id=b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385
Jul 15 11:08:22.658918 env[1221]: time="2025-07-15T11:08:22.658899989Z" level=warning msg="cleaning up after shim disconnected" id=b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385 namespace=k8s.io
Jul 15 11:08:22.658998 env[1221]: time="2025-07-15T11:08:22.658984470Z" level=info msg="cleaning up dead shim"
Jul 15 11:08:22.666139 env[1221]: time="2025-07-15T11:08:22.666107103Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4106 runtime=io.containerd.runc.v2\n"
Jul 15 11:08:23.322209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1961106ea0d8dac2b8dc72299b55186765333ca9423f3e7cbb290e81e1a9385-rootfs.mount: Deactivated successfully.
Jul 15 11:08:23.582861 kubelet[1907]: E0715 11:08:23.582762 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:23.585420 env[1221]: time="2025-07-15T11:08:23.585381938Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 11:08:23.643136 env[1221]: time="2025-07-15T11:08:23.643088362Z" level=info msg="CreateContainer within sandbox \"9dce82bc49b3ace745f05415acc172093656f93cd4eec15c3795013c6e6554c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"117d4439369fb30c18c00db3b8a9330c59f8c07e8b5ecb47338e606719c4377f\""
Jul 15 11:08:23.643672 env[1221]: time="2025-07-15T11:08:23.643646204Z" level=info msg="StartContainer for \"117d4439369fb30c18c00db3b8a9330c59f8c07e8b5ecb47338e606719c4377f\""
Jul 15 11:08:23.661757 systemd[1]: Started cri-containerd-117d4439369fb30c18c00db3b8a9330c59f8c07e8b5ecb47338e606719c4377f.scope.
Jul 15 11:08:23.692463 env[1221]: time="2025-07-15T11:08:23.692416067Z" level=info msg="StartContainer for \"117d4439369fb30c18c00db3b8a9330c59f8c07e8b5ecb47338e606719c4377f\" returns successfully"
Jul 15 11:08:23.931557 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 15 11:08:24.587375 kubelet[1907]: E0715 11:08:24.587345 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:24.609626 kubelet[1907]: I0715 11:08:24.609566 1907 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rj8cx" podStartSLOduration=6.609550901 podStartE2EDuration="6.609550901s" podCreationTimestamp="2025-07-15 11:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:08:24.60725521 +0000 UTC m=+87.368207166" watchObservedRunningTime="2025-07-15 11:08:24.609550901 +0000 UTC m=+87.370502897"
Jul 15 11:08:25.146539 systemd[1]: run-containerd-runc-k8s.io-117d4439369fb30c18c00db3b8a9330c59f8c07e8b5ecb47338e606719c4377f-runc.mDFhuA.mount: Deactivated successfully.
Jul 15 11:08:26.418616 kubelet[1907]: E0715 11:08:26.418574 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:26.734654 systemd-networkd[1049]: lxc_health: Link UP
Jul 15 11:08:26.740374 systemd-networkd[1049]: lxc_health: Gained carrier
Jul 15 11:08:26.740543 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 15 11:08:27.334587 kubelet[1907]: E0715 11:08:27.334533 1907 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bwd6t" podUID="cc1ee6a0-14ed-4c6f-a747-5d4cc6473456"
Jul 15 11:08:27.829659 systemd-networkd[1049]: lxc_health: Gained IPv6LL
Jul 15 11:08:28.419735 kubelet[1907]: E0715 11:08:28.419701 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:28.595030 kubelet[1907]: E0715 11:08:28.594983 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:29.334018 kubelet[1907]: E0715 11:08:29.333972 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:29.334177 kubelet[1907]: E0715 11:08:29.334044 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:29.334271 kubelet[1907]: E0715 11:08:29.334242 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:29.597092 kubelet[1907]: E0715 11:08:29.596967 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:32.333968 kubelet[1907]: E0715 11:08:32.333925 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:32.334476 kubelet[1907]: E0715 11:08:32.334454 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:33.333842 kubelet[1907]: E0715 11:08:33.333806 1907 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:08:33.706498 sshd[3713]: pam_unix(sshd:session): session closed for user core
Jul 15 11:08:33.709550 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:58012.service: Deactivated successfully.
Jul 15 11:08:33.710239 systemd[1]: session-25.scope: Deactivated successfully.
Jul 15 11:08:33.710857 systemd-logind[1213]: Session 25 logged out. Waiting for processes to exit.
Jul 15 11:08:33.711617 systemd-logind[1213]: Removed session 25.