May 15 10:03:11.733923 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 10:03:11.733942 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 09:09:56 -00 2025
May 15 10:03:11.733951 kernel: efi: EFI v2.70 by EDK II
May 15 10:03:11.733956 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 15 10:03:11.733961 kernel: random: crng init done
May 15 10:03:11.733967 kernel: ACPI: Early table checksum verification disabled
May 15 10:03:11.733973 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 15 10:03:11.733980 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 10:03:11.733985 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.733990 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.733996 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734001 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734006 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734012 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734019 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734025 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734031 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:03:11.734037 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 10:03:11.734042 kernel: NUMA: Failed to initialise from firmware
May 15 10:03:11.734048 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:03:11.734054 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 15 10:03:11.734060 kernel: Zone ranges:
May 15 10:03:11.734066 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:03:11.734072 kernel: DMA32 empty
May 15 10:03:11.734078 kernel: Normal empty
May 15 10:03:11.734084 kernel: Movable zone start for each node
May 15 10:03:11.734089 kernel: Early memory node ranges
May 15 10:03:11.734095 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 15 10:03:11.734100 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 15 10:03:11.734106 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 15 10:03:11.734111 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 15 10:03:11.734117 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 15 10:03:11.734123 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 15 10:03:11.734129 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 15 10:03:11.734135 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 10:03:11.734141 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 10:03:11.734147 kernel: psci: probing for conduit method from ACPI.
May 15 10:03:11.734152 kernel: psci: PSCIv1.1 detected in firmware.
May 15 10:03:11.734158 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 10:03:11.734163 kernel: psci: Trusted OS migration not required
May 15 10:03:11.734172 kernel: psci: SMC Calling Convention v1.1
May 15 10:03:11.734178 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 10:03:11.734185 kernel: ACPI: SRAT not present
May 15 10:03:11.734192 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 15 10:03:11.734220 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 15 10:03:11.734226 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 10:03:11.734233 kernel: Detected PIPT I-cache on CPU0
May 15 10:03:11.734239 kernel: CPU features: detected: GIC system register CPU interface
May 15 10:03:11.734245 kernel: CPU features: detected: Hardware dirty bit management
May 15 10:03:11.734251 kernel: CPU features: detected: Spectre-v4
May 15 10:03:11.734257 kernel: CPU features: detected: Spectre-BHB
May 15 10:03:11.734265 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 10:03:11.734271 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 10:03:11.734278 kernel: CPU features: detected: ARM erratum 1418040
May 15 10:03:11.734283 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 10:03:11.734289 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 10:03:11.734296 kernel: Policy zone: DMA
May 15 10:03:11.734303 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:03:11.734309 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 10:03:11.734315 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 10:03:11.734321 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 10:03:11.734327 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 10:03:11.734335 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
May 15 10:03:11.734342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 10:03:11.734347 kernel: trace event string verifier disabled
May 15 10:03:11.734353 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 10:03:11.734360 kernel: rcu: RCU event tracing is enabled.
May 15 10:03:11.734366 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 10:03:11.734372 kernel: Trampoline variant of Tasks RCU enabled.
May 15 10:03:11.734378 kernel: Tracing variant of Tasks RCU enabled.
May 15 10:03:11.734384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 10:03:11.734390 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 10:03:11.734396 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 10:03:11.734404 kernel: GICv3: 256 SPIs implemented
May 15 10:03:11.734410 kernel: GICv3: 0 Extended SPIs implemented
May 15 10:03:11.734416 kernel: GICv3: Distributor has no Range Selector support
May 15 10:03:11.734422 kernel: Root IRQ handler: gic_handle_irq
May 15 10:03:11.734428 kernel: GICv3: 16 PPIs implemented
May 15 10:03:11.734434 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 10:03:11.734440 kernel: ACPI: SRAT not present
May 15 10:03:11.734446 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 10:03:11.734453 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 15 10:03:11.734459 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 15 10:03:11.734465 kernel: GICv3: using LPI property table @0x00000000400d0000
May 15 10:03:11.734472 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 15 10:03:11.734479 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:03:11.734485 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 10:03:11.734492 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 10:03:11.734499 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 10:03:11.734505 kernel: arm-pv: using stolen time PV
May 15 10:03:11.734512 kernel: Console: colour dummy device 80x25
May 15 10:03:11.734518 kernel: ACPI: Core revision 20210730
May 15 10:03:11.734525 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 10:03:11.734531 kernel: pid_max: default: 32768 minimum: 301
May 15 10:03:11.734537 kernel: LSM: Security Framework initializing
May 15 10:03:11.734545 kernel: SELinux: Initializing.
May 15 10:03:11.734551 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:03:11.734558 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:03:11.734564 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 10:03:11.734571 kernel: rcu: Hierarchical SRCU implementation.
May 15 10:03:11.734577 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 10:03:11.734583 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 10:03:11.734589 kernel: Remapping and enabling EFI services.
May 15 10:03:11.734595 kernel: smp: Bringing up secondary CPUs ...
May 15 10:03:11.734603 kernel: Detected PIPT I-cache on CPU1
May 15 10:03:11.734610 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 10:03:11.734617 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 15 10:03:11.734623 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:03:11.734629 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 10:03:11.734636 kernel: Detected PIPT I-cache on CPU2
May 15 10:03:11.734642 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 10:03:11.734648 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 15 10:03:11.734655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:03:11.734661 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 10:03:11.734668 kernel: Detected PIPT I-cache on CPU3
May 15 10:03:11.734674 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 10:03:11.734681 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 15 10:03:11.734687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 10:03:11.734697 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 10:03:11.734705 kernel: smp: Brought up 1 node, 4 CPUs
May 15 10:03:11.734711 kernel: SMP: Total of 4 processors activated.
May 15 10:03:11.734717 kernel: CPU features: detected: 32-bit EL0 Support
May 15 10:03:11.734724 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 10:03:11.734730 kernel: CPU features: detected: Common not Private translations
May 15 10:03:11.734737 kernel: CPU features: detected: CRC32 instructions
May 15 10:03:11.734744 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 10:03:11.734751 kernel: CPU features: detected: LSE atomic instructions
May 15 10:03:11.734758 kernel: CPU features: detected: Privileged Access Never
May 15 10:03:11.734764 kernel: CPU features: detected: RAS Extension Support
May 15 10:03:11.734771 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 10:03:11.734777 kernel: CPU: All CPU(s) started at EL1
May 15 10:03:11.734785 kernel: alternatives: patching kernel code
May 15 10:03:11.734791 kernel: devtmpfs: initialized
May 15 10:03:11.734797 kernel: KASLR enabled
May 15 10:03:11.734804 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 10:03:11.734810 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 10:03:11.734817 kernel: pinctrl core: initialized pinctrl subsystem
May 15 10:03:11.734823 kernel: SMBIOS 3.0.0 present.
May 15 10:03:11.734830 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 15 10:03:11.734836 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 10:03:11.734844 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 10:03:11.734850 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 10:03:11.734857 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 10:03:11.734864 kernel: audit: initializing netlink subsys (disabled)
May 15 10:03:11.734870 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
May 15 10:03:11.734877 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 10:03:11.734884 kernel: cpuidle: using governor menu
May 15 10:03:11.734890 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 10:03:11.734896 kernel: ASID allocator initialised with 32768 entries
May 15 10:03:11.734905 kernel: ACPI: bus type PCI registered
May 15 10:03:11.734911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 10:03:11.734917 kernel: Serial: AMBA PL011 UART driver
May 15 10:03:11.734924 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 15 10:03:11.734930 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 15 10:03:11.734937 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 15 10:03:11.734943 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 15 10:03:11.734950 kernel: cryptd: max_cpu_qlen set to 1000
May 15 10:03:11.734956 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 10:03:11.734964 kernel: ACPI: Added _OSI(Module Device)
May 15 10:03:11.734971 kernel: ACPI: Added _OSI(Processor Device)
May 15 10:03:11.734977 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 10:03:11.734984 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 10:03:11.734990 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 15 10:03:11.734997 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 15 10:03:11.735003 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 15 10:03:11.735010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 10:03:11.735016 kernel: ACPI: Interpreter enabled
May 15 10:03:11.735024 kernel: ACPI: Using GIC for interrupt routing
May 15 10:03:11.735035 kernel: ACPI: MCFG table detected, 1 entries
May 15 10:03:11.735041 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 10:03:11.735048 kernel: printk: console [ttyAMA0] enabled
May 15 10:03:11.735055 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 10:03:11.735178 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 10:03:11.735283 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 10:03:11.735348 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 10:03:11.735405 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 10:03:11.735461 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 10:03:11.735469 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 10:03:11.735476 kernel: PCI host bridge to bus 0000:00
May 15 10:03:11.735541 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 10:03:11.735594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 10:03:11.735646 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 10:03:11.735700 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 10:03:11.735772 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 10:03:11.735841 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 10:03:11.735901 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 10:03:11.735960 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 10:03:11.736019 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:03:11.736081 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 10:03:11.736141 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 10:03:11.738523 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 10:03:11.738610 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 10:03:11.738664 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 10:03:11.738717 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 10:03:11.738726 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 10:03:11.738734 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 10:03:11.738745 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 10:03:11.738752 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 10:03:11.738758 kernel: iommu: Default domain type: Translated
May 15 10:03:11.738765 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 10:03:11.738772 kernel: vgaarb: loaded
May 15 10:03:11.738778 kernel: pps_core: LinuxPPS API ver. 1 registered
May 15 10:03:11.738786 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
May 15 10:03:11.738792 kernel: PTP clock support registered
May 15 10:03:11.738799 kernel: Registered efivars operations
May 15 10:03:11.738807 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 10:03:11.738813 kernel: VFS: Disk quotas dquot_6.6.0
May 15 10:03:11.738820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 10:03:11.738826 kernel: pnp: PnP ACPI init
May 15 10:03:11.738899 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 10:03:11.738910 kernel: pnp: PnP ACPI: found 1 devices
May 15 10:03:11.738916 kernel: NET: Registered PF_INET protocol family
May 15 10:03:11.738923 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 10:03:11.738932 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 10:03:11.738938 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 10:03:11.738945 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 10:03:11.738952 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 10:03:11.738958 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 10:03:11.738965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:03:11.738972 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:03:11.738978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 10:03:11.738985 kernel: PCI: CLS 0 bytes, default 64
May 15 10:03:11.738993 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 10:03:11.738999 kernel: kvm [1]: HYP mode not available
May 15 10:03:11.739006 kernel: Initialise system trusted keyrings
May 15 10:03:11.739012 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 10:03:11.739019 kernel: Key type asymmetric registered
May 15 10:03:11.739025 kernel: Asymmetric key parser 'x509' registered
May 15 10:03:11.739032 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 10:03:11.739038 kernel: io scheduler mq-deadline registered
May 15 10:03:11.739045 kernel: io scheduler kyber registered
May 15 10:03:11.739053 kernel: io scheduler bfq registered
May 15 10:03:11.739060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 10:03:11.739066 kernel: ACPI: button: Power Button [PWRB]
May 15 10:03:11.739073 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 10:03:11.739134 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 10:03:11.739143 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 10:03:11.739150 kernel: thunder_xcv, ver 1.0
May 15 10:03:11.739157 kernel: thunder_bgx, ver 1.0
May 15 10:03:11.739163 kernel: nicpf, ver 1.0
May 15 10:03:11.739171 kernel: nicvf, ver 1.0
May 15 10:03:11.739268 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 10:03:11.739329 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T10:03:11 UTC (1747303391)
May 15 10:03:11.739338 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 10:03:11.739345 kernel: NET: Registered PF_INET6 protocol family
May 15 10:03:11.739351 kernel: Segment Routing with IPv6
May 15 10:03:11.739358 kernel: In-situ OAM (IOAM) with IPv6
May 15 10:03:11.739364 kernel: NET: Registered PF_PACKET protocol family
May 15 10:03:11.739373 kernel: Key type dns_resolver registered
May 15 10:03:11.739379 kernel: registered taskstats version 1
May 15 10:03:11.739386 kernel: Loading compiled-in X.509 certificates
May 15 10:03:11.739393 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 3679cbfb4d4756a2ddc177f0eaedea33fb5fdf2e'
May 15 10:03:11.739399 kernel: Key type .fscrypt registered
May 15 10:03:11.739406 kernel: Key type fscrypt-provisioning registered
May 15 10:03:11.739412 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 10:03:11.739419 kernel: ima: Allocated hash algorithm: sha1
May 15 10:03:11.739425 kernel: ima: No architecture policies found
May 15 10:03:11.739433 kernel: clk: Disabling unused clocks
May 15 10:03:11.739439 kernel: Freeing unused kernel memory: 36416K
May 15 10:03:11.739446 kernel: Run /init as init process
May 15 10:03:11.739452 kernel: with arguments:
May 15 10:03:11.739459 kernel: /init
May 15 10:03:11.739465 kernel: with environment:
May 15 10:03:11.739471 kernel: HOME=/
May 15 10:03:11.739478 kernel: TERM=linux
May 15 10:03:11.739484 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 10:03:11.739494 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:03:11.739502 systemd[1]: Detected virtualization kvm.
May 15 10:03:11.739509 systemd[1]: Detected architecture arm64.
May 15 10:03:11.739516 systemd[1]: Running in initrd.
May 15 10:03:11.739523 systemd[1]: No hostname configured, using default hostname.
May 15 10:03:11.739530 systemd[1]: Hostname set to <localhost>.
May 15 10:03:11.739537 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:03:11.739545 systemd[1]: Queued start job for default target initrd.target.
May 15 10:03:11.739552 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:03:11.739559 systemd[1]: Reached target cryptsetup.target.
May 15 10:03:11.739566 systemd[1]: Reached target paths.target.
May 15 10:03:11.739573 systemd[1]: Reached target slices.target.
May 15 10:03:11.739580 systemd[1]: Reached target swap.target.
May 15 10:03:11.739586 systemd[1]: Reached target timers.target.
May 15 10:03:11.739594 systemd[1]: Listening on iscsid.socket.
May 15 10:03:11.739602 systemd[1]: Listening on iscsiuio.socket.
May 15 10:03:11.739609 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 10:03:11.739616 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 10:03:11.739623 systemd[1]: Listening on systemd-journald.socket.
May 15 10:03:11.739631 systemd[1]: Listening on systemd-networkd.socket.
May 15 10:03:11.739638 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 10:03:11.739645 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 10:03:11.739652 systemd[1]: Reached target sockets.target.
May 15 10:03:11.739660 systemd[1]: Starting kmod-static-nodes.service...
May 15 10:03:11.739667 systemd[1]: Finished network-cleanup.service.
May 15 10:03:11.739674 systemd[1]: Starting systemd-fsck-usr.service...
May 15 10:03:11.739681 systemd[1]: Starting systemd-journald.service...
May 15 10:03:11.739688 systemd[1]: Starting systemd-modules-load.service...
May 15 10:03:11.739695 systemd[1]: Starting systemd-resolved.service...
May 15 10:03:11.739702 systemd[1]: Starting systemd-vconsole-setup.service...
May 15 10:03:11.739709 systemd[1]: Finished kmod-static-nodes.service.
May 15 10:03:11.739716 systemd[1]: Finished systemd-fsck-usr.service.
May 15 10:03:11.739724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 10:03:11.739731 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 10:03:11.739738 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 10:03:11.739745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 10:03:11.739752 kernel: audit: type=1130 audit(1747303391.737:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.739762 systemd-journald[290]: Journal started
May 15 10:03:11.739801 systemd-journald[290]: Runtime Journal (/run/log/journal/7babb9ca9fed4a3f96a8daf4b557a49f) is 6.0M, max 48.7M, 42.6M free.
May 15 10:03:11.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.728553 systemd-modules-load[291]: Inserted module 'overlay'
May 15 10:03:11.743755 systemd[1]: Started systemd-journald.service.
May 15 10:03:11.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.747243 kernel: audit: type=1130 audit(1747303391.743:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.752622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 10:03:11.752190 systemd[1]: Finished dracut-cmdline-ask.service.
May 15 10:03:11.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.753704 systemd[1]: Starting dracut-cmdline.service...
May 15 10:03:11.758479 kernel: audit: type=1130 audit(1747303391.750:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.758498 kernel: Bridge firewalling registered
May 15 10:03:11.757264 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 15 10:03:11.757642 systemd-resolved[292]: Positive Trust Anchors:
May 15 10:03:11.757649 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:03:11.757677 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:03:11.762320 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 15 10:03:11.769236 kernel: audit: type=1130 audit(1747303391.766:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.763055 systemd[1]: Started systemd-resolved.service.
May 15 10:03:11.766335 systemd[1]: Reached target nss-lookup.target.
May 15 10:03:11.771522 dracut-cmdline[308]: dracut-dracut-053
May 15 10:03:11.772216 kernel: SCSI subsystem initialized
May 15 10:03:11.772815 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7
May 15 10:03:11.781015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 10:03:11.781059 kernel: device-mapper: uevent: version 1.0.3
May 15 10:03:11.781069 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 15 10:03:11.783613 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 15 10:03:11.784413 systemd[1]: Finished systemd-modules-load.service.
May 15 10:03:11.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.785802 systemd[1]: Starting systemd-sysctl.service...
May 15 10:03:11.788912 kernel: audit: type=1130 audit(1747303391.784:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.794418 systemd[1]: Finished systemd-sysctl.service.
May 15 10:03:11.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.798241 kernel: audit: type=1130 audit(1747303391.794:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.835224 kernel: Loading iSCSI transport class v2.0-870.
May 15 10:03:11.847228 kernel: iscsi: registered transport (tcp)
May 15 10:03:11.864230 kernel: iscsi: registered transport (qla4xxx)
May 15 10:03:11.864270 kernel: QLogic iSCSI HBA Driver
May 15 10:03:11.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.897828 systemd[1]: Finished dracut-cmdline.service.
May 15 10:03:11.901839 kernel: audit: type=1130 audit(1747303391.898:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:11.899351 systemd[1]: Starting dracut-pre-udev.service...
May 15 10:03:11.943237 kernel: raid6: neonx8 gen() 13694 MB/s
May 15 10:03:11.960231 kernel: raid6: neonx8 xor() 10730 MB/s
May 15 10:03:11.977231 kernel: raid6: neonx4 gen() 13460 MB/s
May 15 10:03:11.994223 kernel: raid6: neonx4 xor() 11065 MB/s
May 15 10:03:12.011224 kernel: raid6: neonx2 gen() 12899 MB/s
May 15 10:03:12.028220 kernel: raid6: neonx2 xor() 10231 MB/s
May 15 10:03:12.045225 kernel: raid6: neonx1 gen() 10567 MB/s
May 15 10:03:12.062222 kernel: raid6: neonx1 xor() 8769 MB/s
May 15 10:03:12.079225 kernel: raid6: int64x8 gen() 6240 MB/s
May 15 10:03:12.096225 kernel: raid6: int64x8 xor() 3536 MB/s
May 15 10:03:12.113215 kernel: raid6: int64x4 gen() 7211 MB/s
May 15 10:03:12.130224 kernel: raid6: int64x4 xor() 3848 MB/s
May 15 10:03:12.147222 kernel: raid6: int64x2 gen() 6146 MB/s
May 15 10:03:12.164221 kernel: raid6: int64x2 xor() 3316 MB/s
May 15 10:03:12.181226 kernel: raid6: int64x1 gen() 5040 MB/s
May 15 10:03:12.198309 kernel: raid6: int64x1 xor() 2642 MB/s
May 15 10:03:12.198321 kernel: raid6: using algorithm neonx8 gen() 13694 MB/s
May 15 10:03:12.198330 kernel: raid6: .... xor() 10730 MB/s, rmw enabled
May 15 10:03:12.199393 kernel: raid6: using neon recovery algorithm
May 15 10:03:12.209228 kernel: xor: measuring software checksum speed
May 15 10:03:12.210522 kernel: 8regs : 14618 MB/sec
May 15 10:03:12.210545 kernel: 32regs : 20702 MB/sec
May 15 10:03:12.211748 kernel: arm64_neon : 27598 MB/sec
May 15 10:03:12.211760 kernel: xor: using function: arm64_neon (27598 MB/sec)
May 15 10:03:12.263232 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 15 10:03:12.273814 systemd[1]: Finished dracut-pre-udev.service.
May 15 10:03:12.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:12.275421 systemd[1]: Starting systemd-udevd.service...
May 15 10:03:12.279497 kernel: audit: type=1130 audit(1747303392.274:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:12.279518 kernel: audit: type=1334 audit(1747303392.274:10): prog-id=7 op=LOAD
May 15 10:03:12.274000 audit: BPF prog-id=7 op=LOAD
May 15 10:03:12.274000 audit: BPF prog-id=8 op=LOAD
May 15 10:03:12.289139 systemd-udevd[492]: Using default interface naming scheme 'v252'.
May 15 10:03:12.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:12.292502 systemd[1]: Started systemd-udevd.service.
May 15 10:03:12.293936 systemd[1]: Starting dracut-pre-trigger.service...
May 15 10:03:12.305598 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
May 15 10:03:12.330898 systemd[1]: Finished dracut-pre-trigger.service.
May 15 10:03:12.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:12.332398 systemd[1]: Starting systemd-udev-trigger.service...
May 15 10:03:12.365714 systemd[1]: Finished systemd-udev-trigger.service.
May 15 10:03:12.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:12.394222 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 10:03:12.399430 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 10:03:12.399452 kernel: GPT:9289727 != 19775487
May 15 10:03:12.399461 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 10:03:12.399470 kernel: GPT:9289727 != 19775487
May 15 10:03:12.399479 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 10:03:12.399487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:03:12.415218 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544)
May 15 10:03:12.416853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 15 10:03:12.425948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 10:03:12.426867 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 10:03:12.430784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 10:03:12.434061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 10:03:12.435826 systemd[1]: Starting disk-uuid.service...
May 15 10:03:12.441910 disk-uuid[562]: Primary Header is updated.
May 15 10:03:12.441910 disk-uuid[562]: Secondary Entries is updated.
May 15 10:03:12.441910 disk-uuid[562]: Secondary Header is updated.
May 15 10:03:12.446226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:03:13.457230 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 10:03:13.457277 disk-uuid[563]: The operation has completed successfully.
May 15 10:03:13.485167 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 10:03:13.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.485287 systemd[1]: Finished disk-uuid.service.
May 15 10:03:13.486910 systemd[1]: Starting verity-setup.service...
May 15 10:03:13.514477 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 10:03:13.548787 systemd[1]: Found device dev-mapper-usr.device.
May 15 10:03:13.551561 systemd[1]: Mounting sysusr-usr.mount...
May 15 10:03:13.554096 systemd[1]: Finished verity-setup.service.
May 15 10:03:13.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.606101 systemd[1]: Mounted sysusr-usr.mount.
May 15 10:03:13.607315 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 15 10:03:13.606900 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 15 10:03:13.607746 systemd[1]: Starting ignition-setup.service...
May 15 10:03:13.609661 systemd[1]: Starting parse-ip-for-networkd.service...
May 15 10:03:13.617685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 10:03:13.617729 kernel: BTRFS info (device vda6): using free space tree
May 15 10:03:13.617739 kernel: BTRFS info (device vda6): has skinny extents
May 15 10:03:13.626370 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 10:03:13.631966 systemd[1]: Finished ignition-setup.service.
May 15 10:03:13.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.633600 systemd[1]: Starting ignition-fetch-offline.service...
May 15 10:03:13.702074 systemd[1]: Finished parse-ip-for-networkd.service.
May 15 10:03:13.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.703000 audit: BPF prog-id=9 op=LOAD
May 15 10:03:13.704036 systemd[1]: Starting systemd-networkd.service...
May 15 10:03:13.728149 systemd-networkd[739]: lo: Link UP
May 15 10:03:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.728944 systemd-networkd[739]: lo: Gained carrier
May 15 10:03:13.729396 systemd-networkd[739]: Enumeration completed
May 15 10:03:13.729495 systemd[1]: Started systemd-networkd.service.
May 15 10:03:13.729567 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 10:03:13.730320 systemd[1]: Reached target network.target.
May 15 10:03:13.731707 systemd[1]: Starting iscsiuio.service...
May 15 10:03:13.736544 systemd-networkd[739]: eth0: Link UP
May 15 10:03:13.736548 systemd-networkd[739]: eth0: Gained carrier
May 15 10:03:13.739035 ignition[647]: Ignition 2.14.0
May 15 10:03:13.739042 ignition[647]: Stage: fetch-offline
May 15 10:03:13.739080 ignition[647]: no configs at "/usr/lib/ignition/base.d"
May 15 10:03:13.739090 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:03:13.739296 ignition[647]: parsed url from cmdline: ""
May 15 10:03:13.742362 systemd[1]: Started iscsiuio.service.
May 15 10:03:13.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.739299 ignition[647]: no config URL provided
May 15 10:03:13.739304 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
May 15 10:03:13.744396 systemd[1]: Starting iscsid.service...
May 15 10:03:13.739311 ignition[647]: no config at "/usr/lib/ignition/user.ign"
May 15 10:03:13.739329 ignition[647]: op(1): [started] loading QEMU firmware config module
May 15 10:03:13.739334 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 10:03:13.747305 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 10:03:13.747266 ignition[647]: op(1): [finished] loading QEMU firmware config module
May 15 10:03:13.750042 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 15 10:03:13.750042 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 15 10:03:13.750042 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 15 10:03:13.750042 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored.
May 15 10:03:13.750042 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 15 10:03:13.750042 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 15 10:03:13.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.753150 systemd[1]: Started iscsid.service.
May 15 10:03:13.758144 systemd[1]: Starting dracut-initqueue.service...
May 15 10:03:13.769506 systemd[1]: Finished dracut-initqueue.service.
May 15 10:03:13.770373 systemd[1]: Reached target remote-fs-pre.target.
May 15 10:03:13.771677 systemd[1]: Reached target remote-cryptsetup.target.
May 15 10:03:13.773159 systemd[1]: Reached target remote-fs.target.
May 15 10:03:13.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.775423 systemd[1]: Starting dracut-pre-mount.service...
May 15 10:03:13.783872 systemd[1]: Finished dracut-pre-mount.service.
May 15 10:03:13.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.802687 ignition[647]: parsing config with SHA512: bd6e59fd225ae019f5eeecc135a26aeaeaffbb85ee29f969e5febbd5ca27b1f71ee34ecfecfab20646de613c58b4bb15faff5d18d751640f49e8a75b33865b17
May 15 10:03:13.815471 unknown[647]: fetched base config from "system"
May 15 10:03:13.815481 unknown[647]: fetched user config from "qemu"
May 15 10:03:13.815976 ignition[647]: fetch-offline: fetch-offline passed
May 15 10:03:13.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.816983 systemd[1]: Finished ignition-fetch-offline.service.
May 15 10:03:13.816030 ignition[647]: Ignition finished successfully
May 15 10:03:13.818303 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 10:03:13.819137 systemd[1]: Starting ignition-kargs.service...
May 15 10:03:13.827914 ignition[760]: Ignition 2.14.0
May 15 10:03:13.827925 ignition[760]: Stage: kargs
May 15 10:03:13.828021 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 15 10:03:13.828031 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:03:13.828908 ignition[760]: kargs: kargs passed
May 15 10:03:13.830567 systemd[1]: Finished ignition-kargs.service.
May 15 10:03:13.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.828952 ignition[760]: Ignition finished successfully
May 15 10:03:13.832257 systemd[1]: Starting ignition-disks.service...
May 15 10:03:13.839476 ignition[766]: Ignition 2.14.0
May 15 10:03:13.839487 ignition[766]: Stage: disks
May 15 10:03:13.839629 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 15 10:03:13.839644 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:03:13.842023 systemd[1]: Finished ignition-disks.service.
May 15 10:03:13.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.840665 ignition[766]: disks: disks passed
May 15 10:03:13.843338 systemd[1]: Reached target initrd-root-device.target.
May 15 10:03:13.840713 ignition[766]: Ignition finished successfully
May 15 10:03:13.844304 systemd[1]: Reached target local-fs-pre.target.
May 15 10:03:13.845256 systemd[1]: Reached target local-fs.target.
May 15 10:03:13.846279 systemd[1]: Reached target sysinit.target.
May 15 10:03:13.847238 systemd[1]: Reached target basic.target.
May 15 10:03:13.849062 systemd[1]: Starting systemd-fsck-root.service...
May 15 10:03:13.862908 systemd-fsck[774]: ROOT: clean, 623/553520 files, 56022/553472 blocks
May 15 10:03:13.866971 systemd[1]: Finished systemd-fsck-root.service.
May 15 10:03:13.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.869307 systemd[1]: Mounting sysroot.mount...
May 15 10:03:13.875229 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 15 10:03:13.875280 systemd[1]: Mounted sysroot.mount.
May 15 10:03:13.875950 systemd[1]: Reached target initrd-root-fs.target.
May 15 10:03:13.878007 systemd[1]: Mounting sysroot-usr.mount...
May 15 10:03:13.878821 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 15 10:03:13.878861 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 10:03:13.878883 systemd[1]: Reached target ignition-diskful.target.
May 15 10:03:13.881167 systemd[1]: Mounted sysroot-usr.mount.
May 15 10:03:13.883377 systemd[1]: Starting initrd-setup-root.service...
May 15 10:03:13.887947 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 15 10:03:13.892590 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 15 10:03:13.896690 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 15 10:03:13.900258 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 10:03:13.931028 systemd[1]: Finished initrd-setup-root.service.
May 15 10:03:13.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.932510 systemd[1]: Starting ignition-mount.service...
May 15 10:03:13.933720 systemd[1]: Starting sysroot-boot.service...
May 15 10:03:13.939237 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
May 15 10:03:13.950153 ignition[827]: INFO : Ignition 2.14.0
May 15 10:03:13.950153 ignition[827]: INFO : Stage: mount
May 15 10:03:13.950153 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:03:13.950153 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:03:13.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:13.953660 ignition[827]: INFO : mount: mount passed
May 15 10:03:13.953660 ignition[827]: INFO : Ignition finished successfully
May 15 10:03:13.950936 systemd[1]: Finished ignition-mount.service.
May 15 10:03:13.962835 systemd[1]: Finished sysroot-boot.service.
May 15 10:03:13.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:14.561078 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 10:03:14.568945 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
May 15 10:03:14.568982 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 10:03:14.568992 kernel: BTRFS info (device vda6): using free space tree
May 15 10:03:14.569624 kernel: BTRFS info (device vda6): has skinny extents
May 15 10:03:14.573391 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 10:03:14.574830 systemd[1]: Starting ignition-files.service...
May 15 10:03:14.588668 ignition[855]: INFO : Ignition 2.14.0
May 15 10:03:14.588668 ignition[855]: INFO : Stage: files
May 15 10:03:14.589908 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:03:14.589908 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:03:14.589908 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
May 15 10:03:14.597437 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 10:03:14.597437 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 10:03:14.599956 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 10:03:14.599956 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 10:03:14.602046 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 10:03:14.602046 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:03:14.602046 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:03:14.602046 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 10:03:14.602046 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 15 10:03:14.600122 unknown[855]: wrote ssh authorized keys file for user: core
May 15 10:03:14.679296 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 10:03:14.880984 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 10:03:14.882595 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:03:14.882595 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 10:03:15.214371 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 15 10:03:15.289241 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 10:03:15.290633 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 15 10:03:15.358416 systemd-networkd[739]: eth0: Gained IPv6LL
May 15 10:03:15.561070 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 15 10:03:15.945380 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 10:03:15.945380 ignition[855]: INFO : files: op(d): [started] processing unit "containerd.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 15 10:03:15.948662 ignition[855]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:03:15.979855 ignition[855]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:03:15.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:15.983111 ignition[855]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 10:03:15.983111 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:03:15.983111 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:03:15.983111 ignition[855]: INFO : files: files passed
May 15 10:03:15.983111 ignition[855]: INFO : Ignition finished successfully
May 15 10:03:15.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:15.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:15.982058 systemd[1]: Finished ignition-files.service.
May 15 10:03:15.983843 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 10:03:15.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:15.984868 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 10:03:15.995925 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 15 10:03:15.985645 systemd[1]: Starting ignition-quench.service...
May 15 10:03:15.998288 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 10:03:15.988553 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 10:03:15.988636 systemd[1]: Finished ignition-quench.service.
May 15 10:03:15.991680 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 10:03:15.993027 systemd[1]: Reached target ignition-complete.target.
May 15 10:03:15.995409 systemd[1]: Starting initrd-parse-etc.service...
May 15 10:03:16.008176 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 10:03:16.008331 systemd[1]: Finished initrd-parse-etc.service.
May 15 10:03:16.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.009606 systemd[1]: Reached target initrd-fs.target.
May 15 10:03:16.010734 systemd[1]: Reached target initrd.target.
May 15 10:03:16.011838 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 10:03:16.012633 systemd[1]: Starting dracut-pre-pivot.service...
May 15 10:03:16.022738 systemd[1]: Finished dracut-pre-pivot.service.
May 15 10:03:16.026912 kernel: kauditd_printk_skb: 28 callbacks suppressed
May 15 10:03:16.026934 kernel: audit: type=1130 audit(1747303396.023:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.024147 systemd[1]: Starting initrd-cleanup.service...
May 15 10:03:16.032493 systemd[1]: Stopped target nss-lookup.target.
May 15 10:03:16.033937 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 10:03:16.035423 systemd[1]: Stopped target timers.target.
May 15 10:03:16.036072 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 10:03:16.036182 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 10:03:16.037318 systemd[1]: Stopped target initrd.target.
May 15 10:03:16.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.038406 systemd[1]: Stopped target basic.target.
May 15 10:03:16.043416 kernel: audit: type=1131 audit(1747303396.036:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.041720 systemd[1]: Stopped target ignition-complete.target.
May 15 10:03:16.042954 systemd[1]: Stopped target ignition-diskful.target.
May 15 10:03:16.044976 systemd[1]: Stopped target initrd-root-device.target.
May 15 10:03:16.046773 systemd[1]: Stopped target remote-fs.target.
May 15 10:03:16.047810 systemd[1]: Stopped target remote-fs-pre.target.
May 15 10:03:16.048924 systemd[1]: Stopped target sysinit.target.
May 15 10:03:16.049936 systemd[1]: Stopped target local-fs.target.
May 15 10:03:16.050994 systemd[1]: Stopped target local-fs-pre.target.
May 15 10:03:16.052035 systemd[1]: Stopped target swap.target.
May 15 10:03:16.053097 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 10:03:16.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.058216 kernel: audit: type=1131 audit(1747303396.053:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:03:16.053246 systemd[1]: Stopped dracut-pre-mount.service.
May 15 10:03:16.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.054295 systemd[1]: Stopped target cryptsetup.target. May 15 10:03:16.065803 kernel: audit: type=1131 audit(1747303396.058:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.065819 kernel: audit: type=1131 audit(1747303396.062:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.055176 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 10:03:16.055303 systemd[1]: Stopped dracut-initqueue.service. May 15 10:03:16.058889 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 10:03:16.058989 systemd[1]: Stopped ignition-fetch-offline.service. May 15 10:03:16.062969 systemd[1]: Stopped target paths.target. May 15 10:03:16.066397 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 10:03:16.070540 systemd[1]: Stopped systemd-ask-password-console.path. May 15 10:03:16.071306 systemd[1]: Stopped target slices.target. May 15 10:03:16.072463 systemd[1]: Stopped target sockets.target. May 15 10:03:16.073443 systemd[1]: iscsid.socket: Deactivated successfully. May 15 10:03:16.073518 systemd[1]: Closed iscsid.socket. May 15 10:03:16.074396 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 10:03:16.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.074463 systemd[1]: Closed iscsiuio.socket. May 15 10:03:16.082836 kernel: audit: type=1131 audit(1747303396.076:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.082854 kernel: audit: type=1131 audit(1747303396.079:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.075456 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 10:03:16.075555 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 10:03:16.076522 systemd[1]: ignition-files.service: Deactivated successfully. May 15 10:03:16.076615 systemd[1]: Stopped ignition-files.service. May 15 10:03:16.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.081034 systemd[1]: Stopping ignition-mount.service... 
May 15 10:03:16.090949 ignition[896]: INFO : Ignition 2.14.0 May 15 10:03:16.090949 ignition[896]: INFO : Stage: umount May 15 10:03:16.090949 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:03:16.090949 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:03:16.090949 ignition[896]: INFO : umount: umount passed May 15 10:03:16.090949 ignition[896]: INFO : Ignition finished successfully May 15 10:03:16.102723 kernel: audit: type=1131 audit(1747303396.085:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.102747 kernel: audit: type=1131 audit(1747303396.087:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.102757 kernel: audit: type=1130 audit(1747303396.094:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.084102 systemd[1]: Stopping sysroot-boot.service... May 15 10:03:16.084724 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 10:03:16.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.084890 systemd[1]: Stopped systemd-udev-trigger.service. May 15 10:03:16.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.086248 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 10:03:16.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.086388 systemd[1]: Stopped dracut-pre-trigger.service. May 15 10:03:16.091087 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 10:03:16.091175 systemd[1]: Finished initrd-cleanup.service. May 15 10:03:16.095108 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 10:03:16.095186 systemd[1]: Stopped ignition-mount.service. May 15 10:03:16.100002 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 10:03:16.100352 systemd[1]: Stopped target network.target. 
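Both Ignition stages above (files, then umount) report "Ignition finished successfully", and op(16) earlier left a result marker at /sysroot/etc/.ignition-result.json. A minimal sketch of reading that marker back after switch-root, when the /sysroot prefix is gone; the JSON schema depends on the Ignition version, so no particular keys are assumed here:

    import json

    # Read the result marker Ignition wrote as op(16). After switch-root the
    # file lives at /etc/.ignition-result.json. Schema varies by Ignition
    # version, so just pretty-print whatever is there.
    with open("/etc/.ignition-result.json") as f:
        print(json.dumps(json.load(f), indent=2))
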
May 15 10:03:16.105625 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 10:03:16.105683 systemd[1]: Stopped ignition-disks.service. May 15 10:03:16.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.106661 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 10:03:16.106698 systemd[1]: Stopped ignition-kargs.service. May 15 10:03:16.107756 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 10:03:16.107790 systemd[1]: Stopped ignition-setup.service. May 15 10:03:16.108894 systemd[1]: Stopping systemd-networkd.service... May 15 10:03:16.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.110084 systemd[1]: Stopping systemd-resolved.service... May 15 10:03:16.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.116263 systemd-networkd[739]: eth0: DHCPv6 lease lost May 15 10:03:16.126000 audit: BPF prog-id=9 op=UNLOAD May 15 10:03:16.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.118252 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 10:03:16.118354 systemd[1]: Stopped systemd-networkd.service. May 15 10:03:16.119333 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 10:03:16.119365 systemd[1]: Closed systemd-networkd.socket. May 15 10:03:16.120874 systemd[1]: Stopping network-cleanup.service... May 15 10:03:16.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.122912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 10:03:16.122968 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 10:03:16.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.124114 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:03:16.139000 audit: BPF prog-id=6 op=UNLOAD May 15 10:03:16.124150 systemd[1]: Stopped systemd-sysctl.service. May 15 10:03:16.126600 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 10:03:16.126647 systemd[1]: Stopped systemd-modules-load.service. May 15 10:03:16.127477 systemd[1]: Stopping systemd-udevd.service... May 15 10:03:16.132876 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:03:16.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.133370 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 10:03:16.133464 systemd[1]: Stopped systemd-resolved.service. 
May 15 10:03:16.137304 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 10:03:16.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.137415 systemd[1]: Stopped network-cleanup.service. May 15 10:03:16.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.141827 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 10:03:16.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.141948 systemd[1]: Stopped systemd-udevd.service. May 15 10:03:16.143402 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 10:03:16.143441 systemd[1]: Closed systemd-udevd-control.socket. May 15 10:03:16.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.145211 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 10:03:16.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.145254 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 10:03:16.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.146640 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 10:03:16.146792 systemd[1]: Stopped dracut-pre-udev.service. May 15 10:03:16.147806 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 10:03:16.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.147843 systemd[1]: Stopped dracut-cmdline.service. May 15 10:03:16.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.149067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 10:03:16.149104 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 10:03:16.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:16.151613 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 10:03:16.152571 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 10:03:16.152627 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
May 15 10:03:16.154507 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 10:03:16.154546 systemd[1]: Stopped kmod-static-nodes.service. May 15 10:03:16.155234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 10:03:16.155271 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 10:03:16.157305 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 10:03:16.157752 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 10:03:16.157840 systemd[1]: Stopped sysroot-boot.service. May 15 10:03:16.159109 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 10:03:16.159222 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 10:03:16.160320 systemd[1]: Reached target initrd-switch-root.target. May 15 10:03:16.171000 audit: BPF prog-id=5 op=UNLOAD May 15 10:03:16.171000 audit: BPF prog-id=4 op=UNLOAD May 15 10:03:16.171000 audit: BPF prog-id=3 op=UNLOAD May 15 10:03:16.161172 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 10:03:16.172000 audit: BPF prog-id=8 op=UNLOAD May 15 10:03:16.172000 audit: BPF prog-id=7 op=UNLOAD May 15 10:03:16.161249 systemd[1]: Stopped initrd-setup-root.service. May 15 10:03:16.163313 systemd[1]: Starting initrd-switch-root.service... May 15 10:03:16.169897 systemd[1]: Switching root. May 15 10:03:16.188325 iscsid[746]: iscsid shutting down. May 15 10:03:16.188934 systemd-journald[290]: Journal stopped May 15 10:03:18.315065 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). May 15 10:03:18.315121 kernel: SELinux: Class mctp_socket not defined in policy. May 15 10:03:18.315141 kernel: SELinux: Class anon_inode not defined in policy. May 15 10:03:18.315154 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 10:03:18.315164 kernel: SELinux: policy capability network_peer_controls=1 May 15 10:03:18.315174 kernel: SELinux: policy capability open_perms=1 May 15 10:03:18.315184 kernel: SELinux: policy capability extended_socket_class=1 May 15 10:03:18.315237 kernel: SELinux: policy capability always_check_network=0 May 15 10:03:18.315249 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 10:03:18.315260 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 10:03:18.315270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 10:03:18.315281 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 10:03:18.315296 systemd[1]: Successfully loaded SELinux policy in 34.790ms. May 15 10:03:18.315315 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.056ms. May 15 10:03:18.315330 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:03:18.315345 systemd[1]: Detected virtualization kvm. May 15 10:03:18.315357 systemd[1]: Detected architecture arm64. May 15 10:03:18.315368 systemd[1]: Detected first boot. May 15 10:03:18.315380 systemd[1]: Initializing machine ID from VM UUID. May 15 10:03:18.315391 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 10:03:18.315402 systemd[1]: Populated /etc with preset unit settings. 
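The teardown above is driven entirely by PID 1, and those records survive into the post-switch-root journal that starts below. A minimal sketch of replaying the sequence, assuming the python3-systemd bindings are installed on the booted system:

    from systemd import journal

    # Iterate this boot's journal and print every "Stopped ..." message
    # emitted by PID 1, i.e. the initrd teardown sequence logged above.
    r = journal.Reader()
    r.this_boot()
    r.add_match(_PID="1")
    for entry in r:
        msg = entry.get("MESSAGE", "")
        if msg.startswith("Stopped "):
            print(entry["__REALTIME_TIMESTAMP"], msg)
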
May 15 10:03:18.315414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:03:18.315429 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:03:18.315442 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:03:18.315454 systemd[1]: Queued start job for default target multi-user.target. May 15 10:03:18.315467 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:03:18.315478 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:03:18.315489 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:03:18.315499 systemd[1]: Created slice system-getty.slice. May 15 10:03:18.315511 systemd[1]: Created slice system-modprobe.slice. May 15 10:03:18.315522 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:03:18.315533 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:03:18.315544 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:03:18.315555 systemd[1]: Created slice user.slice. May 15 10:03:18.315567 systemd[1]: Started systemd-ask-password-console.path. May 15 10:03:18.315578 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:03:18.315589 systemd[1]: Set up automount boot.automount. May 15 10:03:18.315600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:03:18.315611 systemd[1]: Reached target integritysetup.target. May 15 10:03:18.315621 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:03:18.315632 systemd[1]: Reached target remote-fs.target. May 15 10:03:18.315643 systemd[1]: Reached target slices.target. May 15 10:03:18.315655 systemd[1]: Reached target swap.target. May 15 10:03:18.315666 systemd[1]: Reached target torcx.target. May 15 10:03:18.315677 systemd[1]: Reached target veritysetup.target. May 15 10:03:18.315688 systemd[1]: Listening on systemd-coredump.socket. May 15 10:03:18.315699 systemd[1]: Listening on systemd-initctl.socket. May 15 10:03:18.315710 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:03:18.315722 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:03:18.315733 systemd[1]: Listening on systemd-journald.socket. May 15 10:03:18.315744 systemd[1]: Listening on systemd-networkd.socket. May 15 10:03:18.315755 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:03:18.315768 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:03:18.315779 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:03:18.315790 systemd[1]: Mounting dev-hugepages.mount... May 15 10:03:18.315801 systemd[1]: Mounting dev-mqueue.mount... May 15 10:03:18.315812 systemd[1]: Mounting media.mount... May 15 10:03:18.315823 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:03:18.315833 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:03:18.315844 systemd[1]: Mounting tmp.mount... May 15 10:03:18.315855 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:03:18.315868 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:03:18.315878 systemd[1]: Starting kmod-static-nodes.service... 
May 15 10:03:18.315889 systemd[1]: Starting modprobe@configfs.service... May 15 10:03:18.315900 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:03:18.315911 systemd[1]: Starting modprobe@drm.service... May 15 10:03:18.315922 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:03:18.315933 systemd[1]: Starting modprobe@fuse.service... May 15 10:03:18.315944 systemd[1]: Starting modprobe@loop.service... May 15 10:03:18.315955 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:03:18.315968 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 10:03:18.315979 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 15 10:03:18.315990 systemd[1]: Starting systemd-journald.service... May 15 10:03:18.316001 systemd[1]: Starting systemd-modules-load.service... May 15 10:03:18.316011 systemd[1]: Starting systemd-network-generator.service... May 15 10:03:18.316023 systemd[1]: Starting systemd-remount-fs.service... May 15 10:03:18.316033 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:03:18.316044 systemd[1]: Mounted dev-hugepages.mount. May 15 10:03:18.316056 systemd[1]: Mounted dev-mqueue.mount. May 15 10:03:18.316066 systemd[1]: Mounted media.mount. May 15 10:03:18.316077 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:03:18.316087 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:03:18.316097 systemd[1]: Mounted tmp.mount. May 15 10:03:18.316111 systemd-journald[1023]: Journal started May 15 10:03:18.316155 systemd-journald[1023]: Runtime Journal (/run/log/journal/7babb9ca9fed4a3f96a8daf4b557a49f) is 6.0M, max 48.7M, 42.6M free. May 15 10:03:18.310000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:03:18.310000 audit[1023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffda8834a0 a2=4000 a3=1 items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:03:18.310000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:03:18.318825 kernel: fuse: init (API version 7.34) May 15 10:03:18.320316 systemd[1]: Finished kmod-static-nodes.service. May 15 10:03:18.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.323386 systemd[1]: Started systemd-journald.service. May 15 10:03:18.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.326816 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:03:18.327006 systemd[1]: Finished modprobe@configfs.service. May 15 10:03:18.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:03:18.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.328167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:03:18.328548 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:03:18.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.329968 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:03:18.330171 systemd[1]: Finished modprobe@drm.service. May 15 10:03:18.330244 kernel: loop: module loaded May 15 10:03:18.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.331104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:03:18.331479 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:03:18.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.332472 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 10:03:18.332691 systemd[1]: Finished modprobe@fuse.service. May 15 10:03:18.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.333772 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:03:18.333998 systemd[1]: Finished modprobe@loop.service. May 15 10:03:18.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.335126 systemd[1]: Finished systemd-modules-load.service. 
May 15 10:03:18.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.336536 systemd[1]: Finished systemd-network-generator.service. May 15 10:03:18.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.337755 systemd[1]: Finished systemd-remount-fs.service. May 15 10:03:18.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.339000 systemd[1]: Reached target network-pre.target. May 15 10:03:18.341047 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:03:18.343075 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:03:18.343824 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:03:18.345612 systemd[1]: Starting systemd-hwdb-update.service... May 15 10:03:18.347684 systemd[1]: Starting systemd-journal-flush.service... May 15 10:03:18.348412 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:03:18.349619 systemd[1]: Starting systemd-random-seed.service... May 15 10:03:18.350450 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:03:18.351696 systemd[1]: Starting systemd-sysctl.service... May 15 10:03:18.355460 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:03:18.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.361895 systemd-journald[1023]: Time spent on flushing to /var/log/journal/7babb9ca9fed4a3f96a8daf4b557a49f is 11.417ms for 930 entries. May 15 10:03:18.361895 systemd-journald[1023]: System Journal (/var/log/journal/7babb9ca9fed4a3f96a8daf4b557a49f) is 8.0M, max 195.6M, 187.6M free. May 15 10:03:18.377488 systemd-journald[1023]: Received client request to flush runtime journal. May 15 10:03:18.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.356440 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:03:18.357399 systemd[1]: Mounted sys-kernel-config.mount. May 15 10:03:18.359571 systemd[1]: Starting systemd-sysusers.service... May 15 10:03:18.371571 systemd[1]: Finished systemd-random-seed.service. May 15 10:03:18.372614 systemd[1]: Reached target first-boot-complete.target. May 15 10:03:18.377117 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:03:18.378230 systemd[1]: Finished systemd-sysctl.service. May 15 10:03:18.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:03:18.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.379273 systemd[1]: Finished systemd-journal-flush.service. May 15 10:03:18.381535 systemd[1]: Starting systemd-udev-settle.service... May 15 10:03:18.393232 systemd[1]: Finished systemd-sysusers.service. May 15 10:03:18.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.395409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:03:18.396565 udevadm[1086]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 10:03:18.418794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:03:18.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.736771 systemd[1]: Finished systemd-hwdb-update.service. May 15 10:03:18.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.738939 systemd[1]: Starting systemd-udevd.service... May 15 10:03:18.758734 systemd-udevd[1092]: Using default interface naming scheme 'v252'. May 15 10:03:18.779136 systemd[1]: Started systemd-udevd.service. May 15 10:03:18.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.781742 systemd[1]: Starting systemd-networkd.service... May 15 10:03:18.791777 systemd[1]: Starting systemd-userdbd.service... May 15 10:03:18.798437 systemd[1]: Found device dev-ttyAMA0.device. May 15 10:03:18.842489 systemd[1]: Started systemd-userdbd.service. May 15 10:03:18.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.855536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:03:18.890151 systemd[1]: Finished systemd-udev-settle.service. May 15 10:03:18.892234 systemd[1]: Starting lvm2-activation-early.service... May 15 10:03:18.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.902827 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 15 10:03:18.912753 systemd-networkd[1101]: lo: Link UP May 15 10:03:18.912762 systemd-networkd[1101]: lo: Gained carrier May 15 10:03:18.913098 systemd-networkd[1101]: Enumeration completed May 15 10:03:18.913284 systemd[1]: Started systemd-networkd.service. May 15 10:03:18.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.914175 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:03:18.917098 systemd-networkd[1101]: eth0: Link UP May 15 10:03:18.917110 systemd-networkd[1101]: eth0: Gained carrier May 15 10:03:18.922110 systemd[1]: Finished lvm2-activation-early.service. May 15 10:03:18.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.922986 systemd[1]: Reached target cryptsetup.target. May 15 10:03:18.924982 systemd[1]: Starting lvm2-activation.service... May 15 10:03:18.928746 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:03:18.933338 systemd-networkd[1101]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:03:18.966237 systemd[1]: Finished lvm2-activation.service. May 15 10:03:18.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.967086 systemd[1]: Reached target local-fs-pre.target. May 15 10:03:18.967900 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 10:03:18.967931 systemd[1]: Reached target local-fs.target. May 15 10:03:18.968617 systemd[1]: Reached target machines.target. May 15 10:03:18.970641 systemd[1]: Starting ldconfig.service... May 15 10:03:18.971752 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:03:18.971807 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:18.973011 systemd[1]: Starting systemd-boot-update.service... May 15 10:03:18.975002 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:03:18.977346 systemd[1]: Starting systemd-machine-id-commit.service... May 15 10:03:18.979468 systemd[1]: Starting systemd-sysext.service... May 15 10:03:18.980756 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1131 (bootctl) May 15 10:03:18.982492 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:03:18.993166 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:03:18.994441 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:03:18.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:18.996978 systemd[1]: usr-share-oem.mount: Deactivated successfully. 
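networkd's enumeration above ends with a DHCPv4 lease of 10.0.0.12/16 from 10.0.0.1. A quick sketch with the standard-library ipaddress module spells out what that prefix implies:

    import ipaddress

    # The lease above: address 10.0.0.12 with a /16 prefix, gateway 10.0.0.1.
    iface = ipaddress.ip_interface("10.0.0.12/16")
    print(iface.network)                                      # 10.0.0.0/16
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: gateway is on-link
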
May 15 10:03:18.997272 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:03:19.013244 kernel: loop0: detected capacity change from 0 to 194096 May 15 10:03:19.060487 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:03:19.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.067231 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:03:19.077590 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31) May 15 10:03:19.077590 systemd-fsck[1143]: /dev/vda1: 236 files, 117182/258078 clusters May 15 10:03:19.079901 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:03:19.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.088270 kernel: loop1: detected capacity change from 0 to 194096 May 15 10:03:19.094740 (sd-sysext)[1149]: Using extensions 'kubernetes'. May 15 10:03:19.095098 (sd-sysext)[1149]: Merged extensions into '/usr'. May 15 10:03:19.114131 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:03:19.116044 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:03:19.118304 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:03:19.120355 systemd[1]: Starting modprobe@loop.service... May 15 10:03:19.121223 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:03:19.121424 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.122507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:03:19.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.122692 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:03:19.123880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:03:19.124034 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:03:19.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.125393 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:03:19.125598 systemd[1]: Finished modprobe@loop.service. 
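(sd-sysext) above merges the 'kubernetes' extension, written by Ignition earlier, into /usr. A minimal sketch to confirm a merged system extension from userspace; the marker path follows the documented systemd-sysext convention (extension-release.<name> under /usr/lib/extension-release.d/), which is an assumption about this image rather than something shown in the log:

    from pathlib import Path

    # systemd-sysext ships a release marker per merged extension image.
    marker = Path("/usr/lib/extension-release.d/extension-release.kubernetes")
    if marker.exists():
        print(marker.read_text())  # ID/SYSEXT_LEVEL-style fields of the extension
    else:
        print("extension not merged")
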
May 15 10:03:19.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.126757 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:03:19.126866 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:03:19.193619 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 10:03:19.197907 systemd[1]: Finished ldconfig.service. May 15 10:03:19.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.297238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:03:19.299170 systemd[1]: Mounting boot.mount... May 15 10:03:19.301029 systemd[1]: Mounting usr-share-oem.mount... May 15 10:03:19.308135 systemd[1]: Mounted boot.mount. May 15 10:03:19.309007 systemd[1]: Mounted usr-share-oem.mount. May 15 10:03:19.311066 systemd[1]: Finished systemd-sysext.service. May 15 10:03:19.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.313550 systemd[1]: Starting ensure-sysext.service... May 15 10:03:19.315972 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:03:19.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.319602 systemd[1]: Finished systemd-boot-update.service. May 15 10:03:19.322105 systemd[1]: Reloading. May 15 10:03:19.325755 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:03:19.326932 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:03:19.328305 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:03:19.356546 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-05-15T10:03:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:03:19.356574 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-05-15T10:03:19Z" level=info msg="torcx already run" May 15 10:03:19.435761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:03:19.435782 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. May 15 10:03:19.455615 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:03:19.503305 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:03:19.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.507528 systemd[1]: Starting audit-rules.service... May 15 10:03:19.509562 systemd[1]: Starting clean-ca-certificates.service... May 15 10:03:19.512042 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:03:19.515464 systemd[1]: Starting systemd-resolved.service... May 15 10:03:19.518389 systemd[1]: Starting systemd-timesyncd.service... May 15 10:03:19.520995 systemd[1]: Starting systemd-update-utmp.service... May 15 10:03:19.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.523085 systemd[1]: Finished clean-ca-certificates.service. May 15 10:03:19.526657 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:03:19.526000 audit[1240]: SYSTEM_BOOT pid=1240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 10:03:19.531022 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:03:19.533175 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:03:19.536086 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:03:19.539919 systemd[1]: Starting modprobe@loop.service... May 15 10:03:19.540749 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:03:19.541089 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.541371 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:03:19.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.544711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:03:19.544914 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:03:19.546629 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 15 10:03:19.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.548275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:03:19.548445 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:03:19.549755 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:03:19.549935 systemd[1]: Finished modprobe@loop.service. May 15 10:03:19.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.552006 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:03:19.552173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:03:19.554071 systemd[1]: Starting systemd-update-done.service... May 15 10:03:19.557167 systemd[1]: Finished systemd-update-utmp.service. May 15 10:03:19.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.562939 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:03:19.565078 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:03:19.568095 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:03:19.570657 systemd[1]: Starting modprobe@loop.service... May 15 10:03:19.571445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:03:19.571629 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.571765 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:03:19.573079 systemd[1]: Finished systemd-update-done.service. May 15 10:03:19.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.574631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:03:19.574795 systemd[1]: Finished modprobe@dm_mod.service. 
May 15 10:03:19.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.576017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:03:19.576175 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:03:19.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.577723 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:03:19.577903 systemd[1]: Finished modprobe@loop.service. May 15 10:03:19.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.580142 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:03:19.580407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:03:19.583098 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:03:19.585468 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:03:19.587506 systemd[1]: Starting modprobe@drm.service... May 15 10:03:19.590180 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:03:19.592519 systemd[1]: Starting modprobe@loop.service... May 15 10:03:19.593452 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:03:19.593637 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.595257 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 10:03:19.596578 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:03:19.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:03:19.598119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:03:19.598336 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:03:19.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.599877 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:03:19.600024 systemd[1]: Finished modprobe@drm.service. May 15 10:03:19.601409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:03:19.601567 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:03:19.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.602834 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:03:19.603091 systemd[1]: Finished modprobe@loop.service. May 15 10:03:19.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.604876 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:03:19.604990 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:03:19.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.606727 systemd[1]: Finished ensure-sysext.service. May 15 10:03:19.621766 systemd[1]: Started systemd-timesyncd.service. May 15 10:03:19.622453 systemd-timesyncd[1239]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 10:03:19.622758 systemd-timesyncd[1239]: Initial clock synchronization to Thu 2025-05-15 10:03:19.506714 UTC. May 15 10:03:19.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:03:19.623751 systemd[1]: Reached target time-set.target. May 15 10:03:19.632105 augenrules[1284]: No rules May 15 10:03:19.633365 systemd[1]: Finished audit-rules.service. 
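systemd-timesyncd reports a first synchronization against 10.0.0.1:123 and time-set.target is reached; note that the sync steps the clock slightly backwards (the clock is set to 10:03:19.506714 by an entry stamped 10:03:19.622758). Had the sync state needed inspection on this host, timedatectl exposes it (a sketch; the fields shown vary with the systemd version):

timedatectl status            # NTP service state and whether the clock is synchronized
timedatectl timesync-status   # per-server poll details (systemd >= 239)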
May 15 10:03:19.631000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 10:03:19.631000 audit[1284]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffced91260 a2=420 a3=0 items=0 ppid=1233 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:03:19.631000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 10:03:19.633864 systemd-resolved[1238]: Positive Trust Anchors: May 15 10:03:19.633872 systemd-resolved[1238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:03:19.633899 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:03:19.646355 systemd-resolved[1238]: Defaulting to hostname 'linux'. May 15 10:03:19.647699 systemd[1]: Started systemd-resolved.service. May 15 10:03:19.648400 systemd[1]: Reached target network.target. May 15 10:03:19.648952 systemd[1]: Reached target nss-lookup.target. May 15 10:03:19.649578 systemd[1]: Reached target sysinit.target. May 15 10:03:19.650197 systemd[1]: Started motdgen.path. May 15 10:03:19.650737 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 10:03:19.651736 systemd[1]: Started logrotate.timer. May 15 10:03:19.652398 systemd[1]: Started mdadm.timer. May 15 10:03:19.652906 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 10:03:19.653555 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 10:03:19.653583 systemd[1]: Reached target paths.target. May 15 10:03:19.654101 systemd[1]: Reached target timers.target. May 15 10:03:19.654979 systemd[1]: Listening on dbus.socket. May 15 10:03:19.656815 systemd[1]: Starting docker.socket... May 15 10:03:19.658609 systemd[1]: Listening on sshd.socket. May 15 10:03:19.659330 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.659669 systemd[1]: Listening on docker.socket. May 15 10:03:19.660304 systemd[1]: Reached target sockets.target. May 15 10:03:19.660884 systemd[1]: Reached target basic.target. May 15 10:03:19.661617 systemd[1]: System is tainted: cgroupsv1 May 15 10:03:19.661665 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:03:19.661691 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:03:19.662723 systemd[1]: Starting containerd.service... May 15 10:03:19.664433 systemd[1]: Starting dbus.service... May 15 10:03:19.666196 systemd[1]: Starting enable-oem-cloudinit.service... May 15 10:03:19.668140 systemd[1]: Starting extend-filesystems.service... 
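The CONFIG_CHANGE/SYSCALL/PROCTITLE triplet at the top of this block is the kernel-side audit trail of audit-rules.service loading its (empty, per augenrules "No rules") rule set. The PROCTITLE field is simply the process argv, hex-encoded with NUL separators, and decodes to /sbin/auditctl -R /etc/audit/audit.rules. A sketch of decoding it and re-running the load by hand (assuming xxd is available; the paths come from the audit record itself):

# Decode the hex-encoded argv from the PROCTITLE record.
echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
  | xxd -r -p | tr '\0' ' '; echo
# -> /sbin/auditctl -R /etc/audit/audit.rules

# Reload the compiled rule set the same way audit-rules.service does.
augenrules --load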
May 15 10:03:19.668996 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 10:03:19.670451 systemd[1]: Starting motdgen.service... May 15 10:03:19.672499 systemd[1]: Starting prepare-helm.service... May 15 10:03:19.674630 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 10:03:19.677217 systemd[1]: Starting sshd-keygen.service... May 15 10:03:19.681521 systemd[1]: Starting systemd-logind.service... May 15 10:03:19.682234 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:03:19.682354 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 10:03:19.683737 systemd[1]: Starting update-engine.service... May 15 10:03:19.695138 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 10:03:19.698230 jq[1311]: true May 15 10:03:19.698667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 10:03:19.700955 jq[1295]: false May 15 10:03:19.698935 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 10:03:19.704303 extend-filesystems[1296]: Found loop1 May 15 10:03:19.705493 extend-filesystems[1296]: Found vda May 15 10:03:19.706472 extend-filesystems[1296]: Found vda1 May 15 10:03:19.709669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 10:03:19.709949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 10:03:19.725360 extend-filesystems[1296]: Found vda2 May 15 10:03:19.726028 extend-filesystems[1296]: Found vda3 May 15 10:03:19.726658 extend-filesystems[1296]: Found usr May 15 10:03:19.727380 extend-filesystems[1296]: Found vda4 May 15 10:03:19.731959 jq[1323]: true May 15 10:03:19.732125 tar[1313]: linux-arm64/helm May 15 10:03:19.729040 systemd[1]: motdgen.service: Deactivated successfully. May 15 10:03:19.729358 systemd[1]: Finished motdgen.service. May 15 10:03:19.733380 extend-filesystems[1296]: Found vda6 May 15 10:03:19.734447 extend-filesystems[1296]: Found vda7 May 15 10:03:19.736131 extend-filesystems[1296]: Found vda9 May 15 10:03:19.737315 extend-filesystems[1296]: Checking size of /dev/vda9 May 15 10:03:19.740880 dbus-daemon[1294]: [system] SELinux support is enabled May 15 10:03:19.741065 systemd[1]: Started dbus.service. May 15 10:03:19.743537 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 10:03:19.743566 systemd[1]: Reached target system-config.target. May 15 10:03:19.744315 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 10:03:19.744330 systemd[1]: Reached target user-config.target. May 15 10:03:19.762029 extend-filesystems[1296]: Resized partition /dev/vda9 May 15 10:03:19.784464 extend-filesystems[1352]: resize2fs 1.46.5 (30-Dec-2021) May 15 10:03:19.797054 bash[1350]: Updated "/home/core/.ssh/authorized_keys" May 15 10:03:19.797381 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
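extend-filesystems walks the vda partitions and queues a resize of /dev/vda9; the resize2fs output a few entries below confirms an online grow from 553472 to 1864699 4k blocks. The manual equivalent is a one-liner and is safe on the mounted ext4 root (a sketch):

# ext4 supports online growth, so this works against the mounted /.
resize2fs /dev/vda9
# Verify the new size (in filesystem blocks).
dumpe2fs -h /dev/vda9 | grep 'Block count'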
May 15 10:03:19.803321 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 10:03:19.816900 systemd-logind[1306]: Watching system buttons on /dev/input/event0 (Power Button) May 15 10:03:19.817150 systemd-logind[1306]: New seat seat0. May 15 10:03:19.819671 systemd[1]: Started systemd-logind.service. May 15 10:03:19.837174 update_engine[1309]: I0515 10:03:19.836901 1309 main.cc:92] Flatcar Update Engine starting May 15 10:03:19.838224 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 10:03:19.847127 extend-filesystems[1352]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 10:03:19.847127 extend-filesystems[1352]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 10:03:19.847127 extend-filesystems[1352]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 10:03:19.851243 extend-filesystems[1296]: Resized filesystem in /dev/vda9 May 15 10:03:19.855403 update_engine[1309]: I0515 10:03:19.847769 1309 update_check_scheduler.cc:74] Next update check in 3m2s May 15 10:03:19.847763 systemd[1]: Started update-engine.service. May 15 10:03:19.850651 systemd[1]: Started locksmithd.service. May 15 10:03:19.852663 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 10:03:19.852893 systemd[1]: Finished extend-filesystems.service. May 15 10:03:19.861695 env[1316]: time="2025-05-15T10:03:19.861635440Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 10:03:19.900525 locksmithd[1356]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 10:03:19.901619 env[1316]: time="2025-05-15T10:03:19.901574120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 10:03:19.901743 env[1316]: time="2025-05-15T10:03:19.901720880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 10:03:19.905359 env[1316]: time="2025-05-15T10:03:19.905326360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 10:03:19.905359 env[1316]: time="2025-05-15T10:03:19.905356280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 10:03:19.905624 env[1316]: time="2025-05-15T10:03:19.905596080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:03:19.905624 env[1316]: time="2025-05-15T10:03:19.905619400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 10:03:19.905682 env[1316]: time="2025-05-15T10:03:19.905632000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 10:03:19.905682 env[1316]: time="2025-05-15T10:03:19.905641280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 10:03:19.905719 env[1316]: time="2025-05-15T10:03:19.905709000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 15 10:03:19.905934 env[1316]: time="2025-05-15T10:03:19.905909520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 10:03:19.906069 env[1316]: time="2025-05-15T10:03:19.906048560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:03:19.906069 env[1316]: time="2025-05-15T10:03:19.906067480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 10:03:19.906131 env[1316]: time="2025-05-15T10:03:19.906115200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 10:03:19.906131 env[1316]: time="2025-05-15T10:03:19.906129480Z" level=info msg="metadata content store policy set" policy=shared May 15 10:03:19.909194 env[1316]: time="2025-05-15T10:03:19.909156200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 10:03:19.909194 env[1316]: time="2025-05-15T10:03:19.909192880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 10:03:19.909281 env[1316]: time="2025-05-15T10:03:19.909218280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 10:03:19.909281 env[1316]: time="2025-05-15T10:03:19.909250560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909281 env[1316]: time="2025-05-15T10:03:19.909264560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909281 env[1316]: time="2025-05-15T10:03:19.909277160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909366 env[1316]: time="2025-05-15T10:03:19.909289520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909635 env[1316]: time="2025-05-15T10:03:19.909609760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909671 env[1316]: time="2025-05-15T10:03:19.909634320Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909671 env[1316]: time="2025-05-15T10:03:19.909648560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909671 env[1316]: time="2025-05-15T10:03:19.909660360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 10:03:19.909732 env[1316]: time="2025-05-15T10:03:19.909672440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 10:03:19.909793 env[1316]: time="2025-05-15T10:03:19.909771920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 10:03:19.909873 env[1316]: time="2025-05-15T10:03:19.909854920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 15 10:03:19.910161 env[1316]: time="2025-05-15T10:03:19.910138080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 10:03:19.910220 env[1316]: time="2025-05-15T10:03:19.910167840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910220 env[1316]: time="2025-05-15T10:03:19.910181040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 10:03:19.910331 env[1316]: time="2025-05-15T10:03:19.910311440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910367 env[1316]: time="2025-05-15T10:03:19.910329960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910367 env[1316]: time="2025-05-15T10:03:19.910342880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910367 env[1316]: time="2025-05-15T10:03:19.910353880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910367 env[1316]: time="2025-05-15T10:03:19.910364720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910441 env[1316]: time="2025-05-15T10:03:19.910377000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910441 env[1316]: time="2025-05-15T10:03:19.910394760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910441 env[1316]: time="2025-05-15T10:03:19.910406920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910441 env[1316]: time="2025-05-15T10:03:19.910420240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 10:03:19.910553 env[1316]: time="2025-05-15T10:03:19.910534160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910619 env[1316]: time="2025-05-15T10:03:19.910558560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910619 env[1316]: time="2025-05-15T10:03:19.910572040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 10:03:19.910619 env[1316]: time="2025-05-15T10:03:19.910583160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 10:03:19.910619 env[1316]: time="2025-05-15T10:03:19.910597680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 10:03:19.910619 env[1316]: time="2025-05-15T10:03:19.910607680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 10:03:19.910714 env[1316]: time="2025-05-15T10:03:19.910623240Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 10:03:19.910714 env[1316]: time="2025-05-15T10:03:19.910654680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 10:03:19.910893 env[1316]: time="2025-05-15T10:03:19.910832760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 10:03:19.910893 env[1316]: time="2025-05-15T10:03:19.910889240Z" level=info msg="Connect containerd service" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.910917280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.911523640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.911861920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.911901720Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.911969080Z" level=info msg="Start subscribing containerd event" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.912023880Z" level=info msg="Start recovering state" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.912083680Z" level=info msg="Start event monitor" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.912102720Z" level=info msg="Start snapshots syncer" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.912113160Z" level=info msg="Start cni network conf syncer for default" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.912120360Z" level=info msg="Start streaming server" May 15 10:03:19.913359 env[1316]: time="2025-05-15T10:03:19.913268000Z" level=info msg="containerd successfully booted in 0.053285s" May 15 10:03:19.912043 systemd[1]: Started containerd.service. May 15 10:03:20.149895 tar[1313]: linux-arm64/LICENSE May 15 10:03:20.150107 tar[1313]: linux-arm64/README.md May 15 10:03:20.154458 systemd[1]: Finished prepare-helm.service. May 15 10:03:20.286417 systemd-networkd[1101]: eth0: Gained IPv6LL May 15 10:03:20.288349 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 10:03:20.289413 systemd[1]: Reached target network-online.target. May 15 10:03:20.291919 systemd[1]: Starting kubelet.service... May 15 10:03:20.805044 systemd[1]: Started kubelet.service. May 15 10:03:21.301377 kubelet[1379]: E0515 10:03:21.301278 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:03:21.303233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:03:21.303377 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:03:22.298479 sshd_keygen[1320]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 10:03:22.315493 systemd[1]: Finished sshd-keygen.service. May 15 10:03:22.317540 systemd[1]: Starting issuegen.service... May 15 10:03:22.322165 systemd[1]: issuegen.service: Deactivated successfully. May 15 10:03:22.322415 systemd[1]: Finished issuegen.service. May 15 10:03:22.324423 systemd[1]: Starting systemd-user-sessions.service... May 15 10:03:22.331636 systemd[1]: Finished systemd-user-sessions.service. May 15 10:03:22.333593 systemd[1]: Started getty@tty1.service. May 15 10:03:22.335521 systemd[1]: Started serial-getty@ttyAMA0.service. May 15 10:03:22.336397 systemd[1]: Reached target getty.target. May 15 10:03:22.337076 systemd[1]: Reached target multi-user.target. May 15 10:03:22.338981 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 10:03:22.345283 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 10:03:22.345495 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 10:03:22.346410 systemd[1]: Startup finished in 5.291s (kernel) + 6.079s (userspace) = 11.371s. May 15 10:03:23.738689 systemd[1]: Created slice system-sshd.slice. May 15 10:03:23.740379 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:52728.service. 
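kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init/join, so this failure (and the scheduled restarts that follow) is expected until bootstrap completes. For illustration only, a stand-in of the kind of file kubeadm drops there might look like this (a hypothetical sketch; real deployments should let kubeadm generate it):

# Hypothetical minimal KubeletConfiguration, standing in for the
# file kubeadm normally writes during init/join.
cat > /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
EOF
systemctl restart kubelet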
May 15 10:03:23.779337 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 52728 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:03:23.781343 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:23.790288 systemd-logind[1306]: New session 1 of user core. May 15 10:03:23.792356 systemd[1]: Created slice user-500.slice. May 15 10:03:23.793569 systemd[1]: Starting user-runtime-dir@500.service... May 15 10:03:23.802816 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:03:23.804271 systemd[1]: Starting user@500.service... May 15 10:03:23.807251 (systemd)[1411]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:23.871262 systemd[1411]: Queued start job for default target default.target. May 15 10:03:23.871489 systemd[1411]: Reached target paths.target. May 15 10:03:23.871504 systemd[1411]: Reached target sockets.target. May 15 10:03:23.871514 systemd[1411]: Reached target timers.target. May 15 10:03:23.871525 systemd[1411]: Reached target basic.target. May 15 10:03:23.871567 systemd[1411]: Reached target default.target. May 15 10:03:23.871589 systemd[1411]: Startup finished in 59ms. May 15 10:03:23.871871 systemd[1]: Started user@500.service. May 15 10:03:23.872882 systemd[1]: Started session-1.scope. May 15 10:03:23.921143 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:52736.service. May 15 10:03:23.957289 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 52736 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:03:23.959128 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:23.962834 systemd-logind[1306]: New session 2 of user core. May 15 10:03:23.963662 systemd[1]: Started session-2.scope. May 15 10:03:24.021454 sshd[1420]: pam_unix(sshd:session): session closed for user core May 15 10:03:24.023688 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:52746.service. May 15 10:03:24.024770 systemd-logind[1306]: Session 2 logged out. Waiting for processes to exit. May 15 10:03:24.024959 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:52736.service: Deactivated successfully. May 15 10:03:24.025683 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:03:24.026053 systemd-logind[1306]: Removed session 2. May 15 10:03:24.058068 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 52746 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:03:24.059146 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:24.062095 systemd-logind[1306]: New session 3 of user core. May 15 10:03:24.062894 systemd[1]: Started session-3.scope. May 15 10:03:24.111344 sshd[1425]: pam_unix(sshd:session): session closed for user core May 15 10:03:24.113561 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:52760.service. May 15 10:03:24.114558 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:52746.service: Deactivated successfully. May 15 10:03:24.115617 systemd-logind[1306]: Session 3 logged out. Waiting for processes to exit. May 15 10:03:24.115875 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:03:24.116761 systemd-logind[1306]: Removed session 3. 
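Each accepted publickey login above produces a per-connection sshd@… service, a session-N.scope under user-500.slice, and matching pam_unix open/close records; logind's view of the same state can be inspected directly (a sketch; the session ID used is an assumption):

loginctl list-sessions                     # one row per active session
loginctl show-session 1 -p User -p Scope   # properties for session 1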
May 15 10:03:24.147230 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:03:24.148548 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:24.152665 systemd-logind[1306]: New session 4 of user core. May 15 10:03:24.154585 systemd[1]: Started session-4.scope. May 15 10:03:24.207625 sshd[1432]: pam_unix(sshd:session): session closed for user core May 15 10:03:24.210419 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:52760.service: Deactivated successfully. May 15 10:03:24.211423 systemd-logind[1306]: Session 4 logged out. Waiting for processes to exit. May 15 10:03:24.212964 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:52764.service. May 15 10:03:24.213932 systemd[1]: session-4.scope: Deactivated successfully. May 15 10:03:24.214816 systemd-logind[1306]: Removed session 4. May 15 10:03:24.245910 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 52764 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:03:24.247040 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:03:24.250504 systemd-logind[1306]: New session 5 of user core. May 15 10:03:24.252438 systemd[1]: Started session-5.scope. May 15 10:03:24.312169 sudo[1445]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:03:24.313362 sudo[1445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:03:24.371572 systemd[1]: Starting docker.service... May 15 10:03:24.452534 env[1457]: time="2025-05-15T10:03:24.452477222Z" level=info msg="Starting up" May 15 10:03:24.453932 env[1457]: time="2025-05-15T10:03:24.453869443Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:03:24.453932 env[1457]: time="2025-05-15T10:03:24.453893438Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:03:24.453932 env[1457]: time="2025-05-15T10:03:24.453921042Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:03:24.453932 env[1457]: time="2025-05-15T10:03:24.453931790Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:03:24.456177 env[1457]: time="2025-05-15T10:03:24.456135673Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:03:24.456177 env[1457]: time="2025-05-15T10:03:24.456161373Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:03:24.456177 env[1457]: time="2025-05-15T10:03:24.456177476Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:03:24.456319 env[1457]: time="2025-05-15T10:03:24.456186836Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:03:24.462445 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport157677261-merged.mount: Deactivated successfully. May 15 10:03:24.652460 env[1457]: time="2025-05-15T10:03:24.652072119Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 15 10:03:24.652745 env[1457]: time="2025-05-15T10:03:24.652724900Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 15 10:03:24.652947 env[1457]: time="2025-05-15T10:03:24.652930740Z" level=info msg="Loading containers: start." 
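dockerd connects to its bundled containerd over a unix socket (the "parsed scheme: \"unix\"" grpc lines), then warns that the blkio weight controllers are unsupported: this host booted on legacy cgroup v1 (see the "System is tainted: cgroupsv1" entry earlier), and this kernel does not provide blkio weight there. Which hierarchy is mounted is a one-liner to check (sketch):

# cgroup2fs -> unified cgroup v2; tmpfs -> legacy v1 mount (as on this host).
stat -fc %T /sys/fs/cgroup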
May 15 10:03:24.774226 kernel: Initializing XFRM netlink socket May 15 10:03:24.799449 env[1457]: time="2025-05-15T10:03:24.799400816Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 15 10:03:24.862453 systemd-networkd[1101]: docker0: Link UP May 15 10:03:24.882595 env[1457]: time="2025-05-15T10:03:24.882550930Z" level=info msg="Loading containers: done." May 15 10:03:24.903332 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck369847068-merged.mount: Deactivated successfully. May 15 10:03:24.912567 env[1457]: time="2025-05-15T10:03:24.912532279Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 10:03:24.912746 env[1457]: time="2025-05-15T10:03:24.912727609Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 10:03:24.912847 env[1457]: time="2025-05-15T10:03:24.912823946Z" level=info msg="Daemon has completed initialization" May 15 10:03:24.928322 systemd[1]: Started docker.service. May 15 10:03:24.933641 env[1457]: time="2025-05-15T10:03:24.933584663Z" level=info msg="API listen on /run/docker.sock" May 15 10:03:26.496142 env[1316]: time="2025-05-15T10:03:26.496088428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 10:03:27.085852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548068928.mount: Deactivated successfully. May 15 10:03:28.846328 env[1316]: time="2025-05-15T10:03:28.846266707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:28.847434 env[1316]: time="2025-05-15T10:03:28.847399253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:28.849291 env[1316]: time="2025-05-15T10:03:28.849267844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:28.851032 env[1316]: time="2025-05-15T10:03:28.851006762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:28.851899 env[1316]: time="2025-05-15T10:03:28.851869455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 15 10:03:28.860559 env[1316]: time="2025-05-15T10:03:28.860527590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 10:03:30.569679 env[1316]: time="2025-05-15T10:03:30.569630610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:30.571357 env[1316]: time="2025-05-15T10:03:30.571321754Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:30.573095 env[1316]: time="2025-05-15T10:03:30.573048681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:30.574957 env[1316]: time="2025-05-15T10:03:30.574904715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:30.576960 env[1316]: time="2025-05-15T10:03:30.576903125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 15 10:03:30.587934 env[1316]: time="2025-05-15T10:03:30.587886392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 10:03:31.503497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 10:03:31.503688 systemd[1]: Stopped kubelet.service. May 15 10:03:31.505237 systemd[1]: Starting kubelet.service... May 15 10:03:31.597536 systemd[1]: Started kubelet.service. May 15 10:03:31.646053 kubelet[1616]: E0515 10:03:31.645985 1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:03:31.648830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:03:31.648986 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 10:03:31.855074 env[1316]: time="2025-05-15T10:03:31.854961885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:31.856306 env[1316]: time="2025-05-15T10:03:31.856275100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:31.859138 env[1316]: time="2025-05-15T10:03:31.859091535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:31.860896 env[1316]: time="2025-05-15T10:03:31.860305881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:31.861290 env[1316]: time="2025-05-15T10:03:31.861263842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 15 10:03:31.870142 env[1316]: time="2025-05-15T10:03:31.870078370Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 10:03:33.159914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911628520.mount: Deactivated successfully. May 15 10:03:33.576043 env[1316]: time="2025-05-15T10:03:33.575995938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:33.577443 env[1316]: time="2025-05-15T10:03:33.577413956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:33.578808 env[1316]: time="2025-05-15T10:03:33.578781383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:33.579917 env[1316]: time="2025-05-15T10:03:33.579879059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:33.580428 env[1316]: time="2025-05-15T10:03:33.580402003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 15 10:03:33.588667 env[1316]: time="2025-05-15T10:03:33.588635012Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 10:03:34.082967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323146308.mount: Deactivated successfully. 
May 15 10:03:35.038499 env[1316]: time="2025-05-15T10:03:35.038436755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.040659 env[1316]: time="2025-05-15T10:03:35.040610985Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.043504 env[1316]: time="2025-05-15T10:03:35.043465286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.046337 env[1316]: time="2025-05-15T10:03:35.046302261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.046666 env[1316]: time="2025-05-15T10:03:35.046643753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 10:03:35.059177 env[1316]: time="2025-05-15T10:03:35.059138332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 10:03:35.467653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459119565.mount: Deactivated successfully. May 15 10:03:35.471370 env[1316]: time="2025-05-15T10:03:35.471330984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.472694 env[1316]: time="2025-05-15T10:03:35.472664138Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.474643 env[1316]: time="2025-05-15T10:03:35.474607021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.475947 env[1316]: time="2025-05-15T10:03:35.475919974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:35.476726 env[1316]: time="2025-05-15T10:03:35.476692824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 15 10:03:35.485916 env[1316]: time="2025-05-15T10:03:35.485864098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 10:03:35.996835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402216412.mount: Deactivated successfully. 
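Note that registry.k8s.io/pause:3.9 is pulled here while the CRI config dumped at containerd startup pins SandboxImage registry.k8s.io/pause:3.6; newer kubeadm releases warn about exactly this mismatch. Aligning them means setting sandbox_image in the CRI section of containerd's config (a sketch; /etc/containerd/config.toml is the upstream default path and may differ on Flatcar):

# In config.toml:
#   [plugins."io.containerd.grpc.v1.cri"]
#     sandbox_image = "registry.k8s.io/pause:3.9"
sed -i 's|pause:3.6|pause:3.9|' /etc/containerd/config.toml
systemctl restart containerd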
May 15 10:03:38.540149 env[1316]: time="2025-05-15T10:03:38.540080693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:38.541644 env[1316]: time="2025-05-15T10:03:38.541615724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:38.543567 env[1316]: time="2025-05-15T10:03:38.543544281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:38.546226 env[1316]: time="2025-05-15T10:03:38.546178194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:38.547261 env[1316]: time="2025-05-15T10:03:38.547221469Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 15 10:03:41.753514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 10:03:41.753696 systemd[1]: Stopped kubelet.service. May 15 10:03:41.755407 systemd[1]: Starting kubelet.service... May 15 10:03:41.896115 systemd[1]: Started kubelet.service. May 15 10:03:41.933942 kubelet[1730]: E0515 10:03:41.933890 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:03:41.935472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:03:41.935623 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:03:45.523941 systemd[1]: Stopped kubelet.service. May 15 10:03:45.526555 systemd[1]: Starting kubelet.service... May 15 10:03:45.543667 systemd[1]: Reloading. May 15 10:03:45.594805 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-05-15T10:03:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:03:45.594839 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-05-15T10:03:45Z" level=info msg="torcx already run" May 15 10:03:45.685475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:03:45.685496 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:03:45.705376 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:03:45.771670 systemd[1]: Started kubelet.service. May 15 10:03:45.774649 systemd[1]: Stopping kubelet.service... 
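By the end of this block kubelet has failed three times and its restart counter has reached 2; the "Reloading." entry is a daemon-reload that re-runs every unit generator (hence the torcx-generator output), which is also why the locksmithd CPUShares/MemoryLimit and docker.socket deprecation warnings from early boot repeat verbatim. The restart counter and unit state are visible with systemctl (sketch):

systemctl show kubelet -p NRestarts -p ExecMainStatus
systemctl status kubelet --no-pager -l   # recent log lines and the restart schedule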
May 15 10:03:45.775694 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:03:45.775962 systemd[1]: Stopped kubelet.service. May 15 10:03:45.778121 systemd[1]: Starting kubelet.service... May 15 10:03:45.861369 systemd[1]: Started kubelet.service. May 15 10:03:45.900931 kubelet[1827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:03:45.900931 kubelet[1827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:03:45.900931 kubelet[1827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:03:45.901397 kubelet[1827]: I0515 10:03:45.901180 1827 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:03:46.523983 kubelet[1827]: I0515 10:03:46.523935 1827 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:03:46.523983 kubelet[1827]: I0515 10:03:46.523968 1827 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:03:46.524222 kubelet[1827]: I0515 10:03:46.524179 1827 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:03:46.567278 kubelet[1827]: E0515 10:03:46.567245 1827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.567854 kubelet[1827]: I0515 10:03:46.567820 1827 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:03:46.584003 kubelet[1827]: I0515 10:03:46.583948 1827 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:03:46.585546 kubelet[1827]: I0515 10:03:46.585489 1827 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:03:46.585767 kubelet[1827]: I0515 10:03:46.585546 1827 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:03:46.585844 kubelet[1827]: I0515 10:03:46.585839 1827 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:03:46.585871 kubelet[1827]: I0515 10:03:46.585850 1827 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:03:46.586711 kubelet[1827]: I0515 10:03:46.586673 1827 state_mem.go:36] "Initialized new in-memory state store" May 15 10:03:46.588252 kubelet[1827]: I0515 10:03:46.588228 1827 kubelet.go:400] "Attempting to sync node with API server" May 15 10:03:46.588372 kubelet[1827]: I0515 10:03:46.588359 1827 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:03:46.588490 kubelet[1827]: W0515 10:03:46.588402 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.588490 kubelet[1827]: E0515 10:03:46.588463 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.588667 kubelet[1827]: I0515 10:03:46.588655 1827 kubelet.go:312] "Adding apiserver pod source" May 15 10:03:46.588748 kubelet[1827]: I0515 10:03:46.588738 1827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:03:46.589396 kubelet[1827]: W0515 10:03:46.589341 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.589396 kubelet[1827]: E0515 10:03:46.589395 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.590083 kubelet[1827]: I0515 10:03:46.590055 1827 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:03:46.590610 kubelet[1827]: I0515 10:03:46.590592 1827 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:03:46.590852 kubelet[1827]: W0515 10:03:46.590841 1827 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 10:03:46.591858 kubelet[1827]: I0515 10:03:46.591831 1827 server.go:1264] "Started kubelet" May 15 10:03:46.594703 kubelet[1827]: I0515 10:03:46.593415 1827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:03:46.594829 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 15 10:03:46.595075 kubelet[1827]: I0515 10:03:46.595049 1827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:03:46.602912 kubelet[1827]: I0515 10:03:46.602877 1827 server.go:455] "Adding debug handlers to kubelet server" May 15 10:03:46.603887 kubelet[1827]: I0515 10:03:46.603847 1827 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 10:03:46.603957 kubelet[1827]: I0515 10:03:46.603839 1827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:03:46.604252 kubelet[1827]: I0515 10:03:46.604235 1827 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:03:46.604973 kubelet[1827]: I0515 10:03:46.604951 1827 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:03:46.613678 kubelet[1827]: I0515 10:03:46.613646 1827 reconciler.go:26] "Reconciler: start to sync state" May 15 10:03:46.613818 kubelet[1827]: E0515 10:03:46.613785 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" May 15 10:03:46.613956 kubelet[1827]: W0515 10:03:46.613880 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.613956 kubelet[1827]: E0515 10:03:46.613934 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.620993 kubelet[1827]: I0515 10:03:46.617960 1827 factory.go:221] Registration of the systemd container factory successfully May 15 10:03:46.620993 kubelet[1827]: I0515 10:03:46.618063 1827 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:03:46.622548 kubelet[1827]: I0515 10:03:46.622499 1827 factory.go:221] Registration of the containerd container factory successfully May 15 10:03:46.632962 kubelet[1827]: E0515 10:03:46.632704 1827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fab3578d32ca8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:03:46.591763624 +0000 UTC m=+0.727053726,LastTimestamp:2025-05-15 10:03:46.591763624 +0000 UTC m=+0.727053726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:03:46.634642 kubelet[1827]: E0515 10:03:46.634600 1827 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:03:46.636914 kubelet[1827]: I0515 10:03:46.636223 1827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:03:46.637462 kubelet[1827]: I0515 10:03:46.637426 1827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:03:46.637598 kubelet[1827]: I0515 10:03:46.637582 1827 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:03:46.637635 kubelet[1827]: I0515 10:03:46.637608 1827 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:03:46.637680 kubelet[1827]: E0515 10:03:46.637657 1827 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:03:46.641466 kubelet[1827]: W0515 10:03:46.641425 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.641576 kubelet[1827]: E0515 10:03:46.641477 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:46.651332 kubelet[1827]: I0515 10:03:46.651306 1827 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:03:46.651510 kubelet[1827]: I0515 10:03:46.651495 1827 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:03:46.651578 kubelet[1827]: I0515 10:03:46.651569 1827 state_mem.go:36] "Initialized new in-memory state store" May 15 10:03:46.705861 kubelet[1827]: I0515 10:03:46.705810 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:46.706269 kubelet[1827]: E0515 10:03:46.706239 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" May 15 10:03:46.738611 kubelet[1827]: 
E0515 10:03:46.738551 1827 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 10:03:46.765732 kubelet[1827]: I0515 10:03:46.765698 1827 policy_none.go:49] "None policy: Start" May 15 10:03:46.766572 kubelet[1827]: I0515 10:03:46.766552 1827 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 10:03:46.766670 kubelet[1827]: I0515 10:03:46.766580 1827 state_mem.go:35] "Initializing new in-memory state store" May 15 10:03:46.773271 kubelet[1827]: I0515 10:03:46.773238 1827 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:03:46.773470 kubelet[1827]: I0515 10:03:46.773428 1827 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:03:46.773553 kubelet[1827]: I0515 10:03:46.773537 1827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:03:46.778454 kubelet[1827]: E0515 10:03:46.775335 1827 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 10:03:46.814648 kubelet[1827]: E0515 10:03:46.814592 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" May 15 10:03:46.907615 kubelet[1827]: I0515 10:03:46.907575 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:46.908966 kubelet[1827]: E0515 10:03:46.908278 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" May 15 10:03:46.938686 kubelet[1827]: I0515 10:03:46.938631 1827 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:03:46.940927 kubelet[1827]: I0515 10:03:46.939833 1827 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:03:46.940927 kubelet[1827]: I0515 10:03:46.940919 1827 topology_manager.go:215] "Topology Admit Handler" podUID="ae0960959c83fdfb017f61458137fafe" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:03:47.015009 kubelet[1827]: I0515 10:03:47.014971 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:47.015009 kubelet[1827]: I0515 10:03:47.015010 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:47.015160 kubelet[1827]: I0515 10:03:47.015036 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:47.015160 kubelet[1827]: I0515 10:03:47.015054 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:47.015160 kubelet[1827]: I0515 10:03:47.015074 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:03:47.015160 kubelet[1827]: I0515 10:03:47.015089 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:47.015160 kubelet[1827]: I0515 10:03:47.015105 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:47.015324 kubelet[1827]: I0515 10:03:47.015119 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:47.015324 kubelet[1827]: I0515 10:03:47.015135 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:47.215696 kubelet[1827]: E0515 10:03:47.215564 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" May 15 10:03:47.243925 kubelet[1827]: E0515 10:03:47.243885 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.244676 env[1316]: time="2025-05-15T10:03:47.244626629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 10:03:47.248371 kubelet[1827]: E0515 10:03:47.248344 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.249267 env[1316]: time="2025-05-15T10:03:47.248993033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae0960959c83fdfb017f61458137fafe,Namespace:kube-system,Attempt:0,}" May 15 10:03:47.252003 kubelet[1827]: E0515 10:03:47.251961 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.252423 env[1316]: time="2025-05-15T10:03:47.252381821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 10:03:47.310400 kubelet[1827]: I0515 10:03:47.310356 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:47.310773 kubelet[1827]: E0515 10:03:47.310747 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" May 15 10:03:47.413760 kubelet[1827]: W0515 10:03:47.413689 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:47.413760 kubelet[1827]: E0515 10:03:47.413759 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:47.473430 kubelet[1827]: E0515 10:03:47.473250 1827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fab3578d32ca8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:03:46.591763624 +0000 UTC m=+0.727053726,LastTimestamp:2025-05-15 10:03:46.591763624 +0000 UTC m=+0.727053726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:03:47.767308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380618521.mount: Deactivated successfully. 
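
The burst of "connection refused" errors above is expected on a bootstrapping control-plane node: the kubelet is trying to reach the apiserver at 10.0.0.12:6443 before it has started the kube-apiserver static pod, and each failed lease attempt is retried on a growing interval (200ms, then 400ms, then 800ms, and 1.6s later in the log). A minimal Go sketch of that dial-and-double-the-interval pattern; the probeAPIServer helper is hypothetical and illustrative only, not the kubelet's actual retry code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeAPIServer dials the apiserver endpoint, doubling the retry
    // interval after each failure, mirroring the 200ms -> 400ms -> 800ms
    // -> 1.6s progression visible in the lease-controller lines above.
    // Illustrative sketch only; not kubelet code.
    func probeAPIServer(addr string, start, max time.Duration, attempts int) error {
    	interval := start
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, interval)
    		time.Sleep(interval)
    		if interval *= 2; interval > max {
    			interval = max
    		}
    	}
    	return fmt.Errorf("apiserver %s unreachable after %d attempts", addr, attempts)
    }

    func main() {
    	_ = probeAPIServer("10.0.0.12:6443", 200*time.Millisecond, 7*time.Second, 10)
    }

Once the static pods below come up and the apiserver starts answering, the reflectors and the lease controller succeed on their next retry, which is why these errors stop appearing after node registration.
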
May 15 10:03:47.776567 env[1316]: time="2025-05-15T10:03:47.776514158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.779171 env[1316]: time="2025-05-15T10:03:47.779127571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.781235 env[1316]: time="2025-05-15T10:03:47.781187921Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.783647 env[1316]: time="2025-05-15T10:03:47.783608049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.785868 env[1316]: time="2025-05-15T10:03:47.785824858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.788310 env[1316]: time="2025-05-15T10:03:47.788270577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.789881 env[1316]: time="2025-05-15T10:03:47.789848437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.793297 env[1316]: time="2025-05-15T10:03:47.793263174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.794835 env[1316]: time="2025-05-15T10:03:47.794802729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.795529 env[1316]: time="2025-05-15T10:03:47.795499735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.797665 env[1316]: time="2025-05-15T10:03:47.797624580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.798321 env[1316]: time="2025-05-15T10:03:47.798296716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:03:47.829312 env[1316]: time="2025-05-15T10:03:47.829227598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:03:47.829312 env[1316]: time="2025-05-15T10:03:47.829284415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:03:47.829312 env[1316]: time="2025-05-15T10:03:47.829298410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:03:47.829576 env[1316]: time="2025-05-15T10:03:47.829537996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab472b11e7af13fffe93b37f788d17bc3fdb46a1621843619845861b2985c94 pid=1872 runtime=io.containerd.runc.v2 May 15 10:03:47.830054 env[1316]: time="2025-05-15T10:03:47.830003133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:03:47.830054 env[1316]: time="2025-05-15T10:03:47.830042397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:03:47.830145 env[1316]: time="2025-05-15T10:03:47.830052593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:03:47.830307 env[1316]: time="2025-05-15T10:03:47.830259232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07edc4acf7346c5910685530f9b2bb80a5ba41c870367334764a412c6c18ec3f pid=1876 runtime=io.containerd.runc.v2 May 15 10:03:47.834889 env[1316]: time="2025-05-15T10:03:47.834820439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:03:47.834889 env[1316]: time="2025-05-15T10:03:47.834861823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:03:47.835080 env[1316]: time="2025-05-15T10:03:47.835043552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:03:47.835329 env[1316]: time="2025-05-15T10:03:47.835287216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5980ae580f3bb64e99db1c53b3e94ff7805c1ad8b8c0ac5cf95c7948b7ea000d pid=1901 runtime=io.containerd.runc.v2 May 15 10:03:47.884248 kubelet[1827]: W0515 10:03:47.884134 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:47.884248 kubelet[1827]: E0515 10:03:47.884222 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:47.899999 env[1316]: time="2025-05-15T10:03:47.899951838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae0960959c83fdfb017f61458137fafe,Namespace:kube-system,Attempt:0,} returns sandbox id \"dab472b11e7af13fffe93b37f788d17bc3fdb46a1621843619845861b2985c94\"" May 15 10:03:47.901191 kubelet[1827]: E0515 10:03:47.900914 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.904686 env[1316]: time="2025-05-15T10:03:47.904643873Z" level=info msg="CreateContainer within sandbox \"dab472b11e7af13fffe93b37f788d17bc3fdb46a1621843619845861b2985c94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 10:03:47.909087 env[1316]: time="2025-05-15T10:03:47.909039426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"07edc4acf7346c5910685530f9b2bb80a5ba41c870367334764a412c6c18ec3f\"" May 15 10:03:47.909874 kubelet[1827]: E0515 10:03:47.909849 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.912330 env[1316]: time="2025-05-15T10:03:47.912286509Z" level=info msg="CreateContainer within sandbox \"07edc4acf7346c5910685530f9b2bb80a5ba41c870367334764a412c6c18ec3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 10:03:47.920709 env[1316]: time="2025-05-15T10:03:47.920661777Z" level=info msg="CreateContainer within sandbox \"dab472b11e7af13fffe93b37f788d17bc3fdb46a1621843619845861b2985c94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1add713dc0e36019fa71954888f341202a3ba33e24ac0a721deb45552527f06c\"" May 15 10:03:47.921311 env[1316]: time="2025-05-15T10:03:47.921277775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"5980ae580f3bb64e99db1c53b3e94ff7805c1ad8b8c0ac5cf95c7948b7ea000d\"" May 15 10:03:47.921497 env[1316]: time="2025-05-15T10:03:47.921474138Z" level=info msg="StartContainer for \"1add713dc0e36019fa71954888f341202a3ba33e24ac0a721deb45552527f06c\"" May 15 10:03:47.922391 kubelet[1827]: E0515 10:03:47.922128 1827 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:47.924330 env[1316]: time="2025-05-15T10:03:47.924287912Z" level=info msg="CreateContainer within sandbox \"5980ae580f3bb64e99db1c53b3e94ff7805c1ad8b8c0ac5cf95c7948b7ea000d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 10:03:47.926373 env[1316]: time="2025-05-15T10:03:47.926309117Z" level=info msg="CreateContainer within sandbox \"07edc4acf7346c5910685530f9b2bb80a5ba41c870367334764a412c6c18ec3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e2237715cfe72747b6e536a569c09cdb7c2bdcb347a46c7cbd908c60fdba6f2\"" May 15 10:03:47.928639 env[1316]: time="2025-05-15T10:03:47.928519608Z" level=info msg="StartContainer for \"8e2237715cfe72747b6e536a569c09cdb7c2bdcb347a46c7cbd908c60fdba6f2\"" May 15 10:03:47.942359 env[1316]: time="2025-05-15T10:03:47.942302991Z" level=info msg="CreateContainer within sandbox \"5980ae580f3bb64e99db1c53b3e94ff7805c1ad8b8c0ac5cf95c7948b7ea000d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5d48f2b806848810f8ffedc9d127a35a1156fc773d2c305cd76ed5184d941dd\"" May 15 10:03:47.942953 env[1316]: time="2025-05-15T10:03:47.942923547Z" level=info msg="StartContainer for \"a5d48f2b806848810f8ffedc9d127a35a1156fc773d2c305cd76ed5184d941dd\"" May 15 10:03:48.016642 kubelet[1827]: E0515 10:03:48.016515 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" May 15 10:03:48.021527 env[1316]: time="2025-05-15T10:03:48.021422654Z" level=info msg="StartContainer for \"8e2237715cfe72747b6e536a569c09cdb7c2bdcb347a46c7cbd908c60fdba6f2\" returns successfully" May 15 10:03:48.038260 env[1316]: time="2025-05-15T10:03:48.038177371Z" level=info msg="StartContainer for \"a5d48f2b806848810f8ffedc9d127a35a1156fc773d2c305cd76ed5184d941dd\" returns successfully" May 15 10:03:48.049680 env[1316]: time="2025-05-15T10:03:48.049616877Z" level=info msg="StartContainer for \"1add713dc0e36019fa71954888f341202a3ba33e24ac0a721deb45552527f06c\" returns successfully" May 15 10:03:48.059844 kubelet[1827]: W0515 10:03:48.059731 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:48.059844 kubelet[1827]: E0515 10:03:48.059800 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:48.113195 kubelet[1827]: I0515 10:03:48.112826 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:48.113195 kubelet[1827]: E0515 10:03:48.113155 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" May 15 10:03:48.123187 kubelet[1827]: W0515 10:03:48.123077 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:48.123187 kubelet[1827]: E0515 10:03:48.123143 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused May 15 10:03:48.647280 kubelet[1827]: E0515 10:03:48.647251 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:48.649813 kubelet[1827]: E0515 10:03:48.649783 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:48.651973 kubelet[1827]: E0515 10:03:48.651948 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:49.619744 kubelet[1827]: E0515 10:03:49.619705 1827 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 10:03:49.653643 kubelet[1827]: E0515 10:03:49.653612 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:49.715117 kubelet[1827]: I0515 10:03:49.715088 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:49.722628 kubelet[1827]: I0515 10:03:49.722601 1827 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:03:49.730870 kubelet[1827]: E0515 10:03:49.730843 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:49.831632 kubelet[1827]: E0515 10:03:49.831594 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:49.932190 kubelet[1827]: E0515 10:03:49.932093 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:50.032713 kubelet[1827]: E0515 10:03:50.032678 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:50.133473 kubelet[1827]: E0515 10:03:50.133435 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:50.234268 kubelet[1827]: E0515 10:03:50.234149 1827 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:03:50.590680 kubelet[1827]: I0515 10:03:50.590648 1827 apiserver.go:52] "Watching apiserver" May 15 10:03:50.605276 kubelet[1827]: I0515 10:03:50.605245 1827 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:03:51.328963 systemd[1]: Reloading. 
May 15 10:03:51.389016 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2025-05-15T10:03:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:03:51.389048 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2025-05-15T10:03:51Z" level=info msg="torcx already run" May 15 10:03:51.468668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:03:51.468689 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:03:51.488685 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:03:51.566022 systemd[1]: Stopping kubelet.service... May 15 10:03:51.584822 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:03:51.585114 systemd[1]: Stopped kubelet.service. May 15 10:03:51.587065 systemd[1]: Starting kubelet.service... May 15 10:03:51.675464 systemd[1]: Started kubelet.service. May 15 10:03:51.717460 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:03:51.717460 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:03:51.717460 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:03:51.717819 kubelet[2175]: I0515 10:03:51.717501 2175 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:03:51.723038 kubelet[2175]: I0515 10:03:51.723012 2175 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:03:51.723156 kubelet[2175]: I0515 10:03:51.723145 2175 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:03:51.723555 kubelet[2175]: I0515 10:03:51.723532 2175 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:03:51.725260 kubelet[2175]: I0515 10:03:51.725238 2175 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 10:03:51.728359 kubelet[2175]: I0515 10:03:51.728337 2175 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:03:51.736782 kubelet[2175]: I0515 10:03:51.736758 2175 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:03:51.737164 kubelet[2175]: I0515 10:03:51.737137 2175 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:03:51.737336 kubelet[2175]: I0515 10:03:51.737164 2175 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:03:51.737430 kubelet[2175]: I0515 10:03:51.737343 2175 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:03:51.737430 kubelet[2175]: I0515 10:03:51.737353 2175 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:03:51.737430 kubelet[2175]: I0515 10:03:51.737386 2175 state_mem.go:36] "Initialized new in-memory state store" May 15 10:03:51.737508 kubelet[2175]: I0515 10:03:51.737479 2175 kubelet.go:400] "Attempting to sync node with API server" May 15 10:03:51.737508 kubelet[2175]: I0515 10:03:51.737490 2175 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:03:51.737552 kubelet[2175]: I0515 10:03:51.737513 2175 kubelet.go:312] "Adding apiserver pod source" May 15 10:03:51.737552 kubelet[2175]: I0515 10:03:51.737530 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:03:51.738261 kubelet[2175]: I0515 10:03:51.738236 2175 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:03:51.738701 kubelet[2175]: I0515 10:03:51.738633 2175 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:03:51.739160 kubelet[2175]: I0515 10:03:51.739088 2175 server.go:1264] "Started kubelet" May 15 10:03:51.741047 kubelet[2175]: I0515 10:03:51.741009 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:03:51.743493 kubelet[2175]: I0515 10:03:51.743442 2175 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:03:51.744225 kubelet[2175]: I0515 10:03:51.744184 2175 volume_manager.go:291] "Starting Kubelet 
Volume Manager" May 15 10:03:51.744525 kubelet[2175]: I0515 10:03:51.744507 2175 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:03:51.744904 kubelet[2175]: I0515 10:03:51.744842 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:03:51.745366 kubelet[2175]: I0515 10:03:51.745349 2175 reconciler.go:26] "Reconciler: start to sync state" May 15 10:03:51.745497 kubelet[2175]: I0515 10:03:51.745411 2175 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:03:51.746308 kubelet[2175]: I0515 10:03:51.746259 2175 server.go:455] "Adding debug handlers to kubelet server" May 15 10:03:51.759872 kubelet[2175]: I0515 10:03:51.759836 2175 factory.go:221] Registration of the systemd container factory successfully May 15 10:03:51.759984 kubelet[2175]: I0515 10:03:51.759958 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:03:51.765994 kubelet[2175]: I0515 10:03:51.765063 2175 factory.go:221] Registration of the containerd container factory successfully May 15 10:03:51.766253 kubelet[2175]: E0515 10:03:51.766181 2175 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:03:51.778618 kubelet[2175]: I0515 10:03:51.778441 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:03:51.780944 kubelet[2175]: I0515 10:03:51.780890 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:03:51.780944 kubelet[2175]: I0515 10:03:51.780935 2175 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:03:51.780944 kubelet[2175]: I0515 10:03:51.780954 2175 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:03:51.781087 kubelet[2175]: E0515 10:03:51.781004 2175 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:03:51.816208 kubelet[2175]: I0515 10:03:51.816151 2175 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:03:51.816208 kubelet[2175]: I0515 10:03:51.816180 2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:03:51.816343 kubelet[2175]: I0515 10:03:51.816231 2175 state_mem.go:36] "Initialized new in-memory state store" May 15 10:03:51.816411 kubelet[2175]: I0515 10:03:51.816389 2175 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 10:03:51.816449 kubelet[2175]: I0515 10:03:51.816409 2175 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 10:03:51.816449 kubelet[2175]: I0515 10:03:51.816431 2175 policy_none.go:49] "None policy: Start" May 15 10:03:51.817064 kubelet[2175]: I0515 10:03:51.817044 2175 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 10:03:51.817142 kubelet[2175]: I0515 10:03:51.817095 2175 state_mem.go:35] "Initializing new in-memory state store" May 15 10:03:51.817302 kubelet[2175]: I0515 10:03:51.817285 2175 state_mem.go:75] "Updated machine memory state" May 15 10:03:51.818504 kubelet[2175]: I0515 10:03:51.818485 2175 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:03:51.818685 
kubelet[2175]: I0515 10:03:51.818646 2175 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:03:51.818772 kubelet[2175]: I0515 10:03:51.818758 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:03:51.848110 kubelet[2175]: I0515 10:03:51.848022 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:03:51.855346 kubelet[2175]: I0515 10:03:51.855315 2175 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 10:03:51.855450 kubelet[2175]: I0515 10:03:51.855403 2175 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:03:51.881612 kubelet[2175]: I0515 10:03:51.881547 2175 topology_manager.go:215] "Topology Admit Handler" podUID="ae0960959c83fdfb017f61458137fafe" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:03:51.881736 kubelet[2175]: I0515 10:03:51.881686 2175 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:03:51.881787 kubelet[2175]: I0515 10:03:51.881754 2175 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:03:51.946432 kubelet[2175]: I0515 10:03:51.946396 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:51.946633 kubelet[2175]: I0515 10:03:51.946617 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:03:51.946746 kubelet[2175]: I0515 10:03:51.946725 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:51.946933 kubelet[2175]: I0515 10:03:51.946916 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:51.947074 kubelet[2175]: I0515 10:03:51.947052 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae0960959c83fdfb017f61458137fafe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae0960959c83fdfb017f61458137fafe\") " pod="kube-system/kube-apiserver-localhost" May 15 10:03:51.947225 kubelet[2175]: I0515 10:03:51.947185 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:51.947330 kubelet[2175]: I0515 10:03:51.947315 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:51.947419 kubelet[2175]: I0515 10:03:51.947407 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:51.947528 kubelet[2175]: I0515 10:03:51.947513 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:03:52.196461 kubelet[2175]: E0515 10:03:52.196365 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.196576 kubelet[2175]: E0515 10:03:52.196479 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.196744 kubelet[2175]: E0515 10:03:52.196726 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.330312 sudo[2210]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 10:03:52.330876 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 10:03:52.738222 kubelet[2175]: I0515 10:03:52.738160 2175 apiserver.go:52] "Watching apiserver" May 15 10:03:52.744798 kubelet[2175]: I0515 10:03:52.744772 2175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:03:52.791575 sudo[2210]: pam_unix(sudo:session): session closed for user root May 15 10:03:52.793287 kubelet[2175]: E0515 10:03:52.793234 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.793287 kubelet[2175]: E0515 10:03:52.793286 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.881858 kubelet[2175]: E0515 10:03:52.881821 2175 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:03:52.882461 kubelet[2175]: E0515 10:03:52.882441 2175 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:52.890568 kubelet[2175]: I0515 10:03:52.890514 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.890499369 podStartE2EDuration="1.890499369s" podCreationTimestamp="2025-05-15 10:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:03:52.881652078 +0000 UTC m=+1.201593745" watchObservedRunningTime="2025-05-15 10:03:52.890499369 +0000 UTC m=+1.210441036" May 15 10:03:52.899295 kubelet[2175]: I0515 10:03:52.899256 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.899242585 podStartE2EDuration="1.899242585s" podCreationTimestamp="2025-05-15 10:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:03:52.890825911 +0000 UTC m=+1.210767578" watchObservedRunningTime="2025-05-15 10:03:52.899242585 +0000 UTC m=+1.219184252" May 15 10:03:52.911648 kubelet[2175]: I0515 10:03:52.907239 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.907223282 podStartE2EDuration="1.907223282s" podCreationTimestamp="2025-05-15 10:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:03:52.89951853 +0000 UTC m=+1.219460197" watchObservedRunningTime="2025-05-15 10:03:52.907223282 +0000 UTC m=+1.227164989" May 15 10:03:53.794667 kubelet[2175]: E0515 10:03:53.794626 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:53.796564 kubelet[2175]: E0515 10:03:53.794763 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:54.567490 kubelet[2175]: E0515 10:03:54.567450 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:54.609629 sudo[1445]: pam_unix(sudo:session): session closed for user root May 15 10:03:54.611233 sshd[1441]: pam_unix(sshd:session): session closed for user core May 15 10:03:54.614385 systemd-logind[1306]: Session 5 logged out. Waiting for processes to exit. May 15 10:03:54.614707 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:52764.service: Deactivated successfully. May 15 10:03:54.615612 systemd[1]: session-5.scope: Deactivated successfully. May 15 10:03:54.616090 systemd-logind[1306]: Removed session 5. 
May 15 10:03:54.796116 kubelet[2175]: E0515 10:03:54.796073 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:03:59.843852 kubelet[2175]: E0515 10:03:59.843817 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:00.808026 kubelet[2175]: E0515 10:04:00.805554 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:01.806870 kubelet[2175]: E0515 10:04:01.806825 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:03.121847 kubelet[2175]: E0515 10:04:03.121805 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:04.573993 kubelet[2175]: E0515 10:04:04.573964 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:04.983309 update_engine[1309]: I0515 10:04:04.982951 1309 update_attempter.cc:509] Updating boot flags... May 15 10:04:05.562717 kubelet[2175]: I0515 10:04:05.562675 2175 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:04:05.563068 env[1316]: time="2025-05-15T10:04:05.563033807Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
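
Per the kuberuntime_manager line above, the kubelet pushes the node's pod CIDR (192.168.0.0/24) to containerd over CRI, and the runtime then waits for a CNI config to appear before handing out pod IPs from that range. As a quick illustration of what such a range provides (a sketch using the standard library, not cluster code):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
    	if err != nil {
    		panic(err)
    	}
    	ones, bits := ipnet.Mask.Size()
    	fmt.Printf("network %s, base IP %s\n", ipnet, ip)
    	fmt.Printf("addresses in range: %d\n", 1<<uint(bits-ones)) // 256 for a /24
    }
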
May 15 10:04:05.563356 kubelet[2175]: I0515 10:04:05.563340 2175 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:04:06.508828 kubelet[2175]: I0515 10:04:06.508779 2175 topology_manager.go:215] "Topology Admit Handler" podUID="846006c2-51f2-43df-94d8-45baacf46935" podNamespace="kube-system" podName="kube-proxy-qp5b9" May 15 10:04:06.515170 kubelet[2175]: I0515 10:04:06.515125 2175 topology_manager.go:215] "Topology Admit Handler" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" podNamespace="kube-system" podName="cilium-vwz6w" May 15 10:04:06.556418 kubelet[2175]: I0515 10:04:06.556373 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-lib-modules\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556418 kubelet[2175]: I0515 10:04:06.556413 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-xtables-lock\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556599 kubelet[2175]: I0515 10:04:06.556437 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-net\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556599 kubelet[2175]: I0515 10:04:06.556457 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-hostproc\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556599 kubelet[2175]: I0515 10:04:06.556473 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrhz\" (UniqueName: \"kubernetes.io/projected/846006c2-51f2-43df-94d8-45baacf46935-kube-api-access-ncrhz\") pod \"kube-proxy-qp5b9\" (UID: \"846006c2-51f2-43df-94d8-45baacf46935\") " pod="kube-system/kube-proxy-qp5b9" May 15 10:04:06.556599 kubelet[2175]: I0515 10:04:06.556493 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cni-path\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556599 kubelet[2175]: I0515 10:04:06.556510 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-config-path\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556527 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-kernel\") pod \"cilium-vwz6w\" (UID: 
\"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556544 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-cgroup\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556560 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c0a55a2-89a2-4518-a008-625c4c63b850-clustermesh-secrets\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556576 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-hubble-tls\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556624 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-bpf-maps\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556722 kubelet[2175]: I0515 10:04:06.556663 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-etc-cni-netd\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556847 kubelet[2175]: I0515 10:04:06.556710 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbshp\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-kube-api-access-zbshp\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556847 kubelet[2175]: I0515 10:04:06.556740 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-run\") pod \"cilium-vwz6w\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " pod="kube-system/cilium-vwz6w" May 15 10:04:06.556847 kubelet[2175]: I0515 10:04:06.556760 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/846006c2-51f2-43df-94d8-45baacf46935-kube-proxy\") pod \"kube-proxy-qp5b9\" (UID: \"846006c2-51f2-43df-94d8-45baacf46935\") " pod="kube-system/kube-proxy-qp5b9" May 15 10:04:06.556847 kubelet[2175]: I0515 10:04:06.556791 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/846006c2-51f2-43df-94d8-45baacf46935-xtables-lock\") pod \"kube-proxy-qp5b9\" (UID: \"846006c2-51f2-43df-94d8-45baacf46935\") " pod="kube-system/kube-proxy-qp5b9" May 15 10:04:06.556847 kubelet[2175]: I0515 10:04:06.556807 2175 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/846006c2-51f2-43df-94d8-45baacf46935-lib-modules\") pod \"kube-proxy-qp5b9\" (UID: \"846006c2-51f2-43df-94d8-45baacf46935\") " pod="kube-system/kube-proxy-qp5b9" May 15 10:04:06.651629 kubelet[2175]: I0515 10:04:06.651561 2175 topology_manager.go:215] "Topology Admit Handler" podUID="0bc94d72-24a7-49e3-8072-97a476a47a0a" podNamespace="kube-system" podName="cilium-operator-599987898-zptlp" May 15 10:04:06.758293 kubelet[2175]: I0515 10:04:06.758240 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bc94d72-24a7-49e3-8072-97a476a47a0a-cilium-config-path\") pod \"cilium-operator-599987898-zptlp\" (UID: \"0bc94d72-24a7-49e3-8072-97a476a47a0a\") " pod="kube-system/cilium-operator-599987898-zptlp" May 15 10:04:06.758293 kubelet[2175]: I0515 10:04:06.758293 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzkp\" (UniqueName: \"kubernetes.io/projected/0bc94d72-24a7-49e3-8072-97a476a47a0a-kube-api-access-vxzkp\") pod \"cilium-operator-599987898-zptlp\" (UID: \"0bc94d72-24a7-49e3-8072-97a476a47a0a\") " pod="kube-system/cilium-operator-599987898-zptlp" May 15 10:04:06.812368 kubelet[2175]: E0515 10:04:06.812267 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:06.814360 env[1316]: time="2025-05-15T10:04:06.814314543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qp5b9,Uid:846006c2-51f2-43df-94d8-45baacf46935,Namespace:kube-system,Attempt:0,}" May 15 10:04:06.817842 kubelet[2175]: E0515 10:04:06.817803 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:06.818514 env[1316]: time="2025-05-15T10:04:06.818196006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwz6w,Uid:3c0a55a2-89a2-4518-a008-625c4c63b850,Namespace:kube-system,Attempt:0,}" May 15 10:04:06.843743 env[1316]: time="2025-05-15T10:04:06.843538093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:04:06.843743 env[1316]: time="2025-05-15T10:04:06.843590091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:04:06.843743 env[1316]: time="2025-05-15T10:04:06.843601091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:04:06.843993 env[1316]: time="2025-05-15T10:04:06.843804286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb pid=2292 runtime=io.containerd.runc.v2 May 15 10:04:06.845044 env[1316]: time="2025-05-15T10:04:06.844987296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:04:06.845044 env[1316]: time="2025-05-15T10:04:06.845022696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:04:06.845044 env[1316]: time="2025-05-15T10:04:06.845039615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:04:06.845257 env[1316]: time="2025-05-15T10:04:06.845172652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9de5fb149d1d600d2f2d99b033c8363b4986ec7d54656d979a612044aeba4f62 pid=2293 runtime=io.containerd.runc.v2 May 15 10:04:06.924529 env[1316]: time="2025-05-15T10:04:06.924482670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qp5b9,Uid:846006c2-51f2-43df-94d8-45baacf46935,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de5fb149d1d600d2f2d99b033c8363b4986ec7d54656d979a612044aeba4f62\"" May 15 10:04:06.924688 env[1316]: time="2025-05-15T10:04:06.924508789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vwz6w,Uid:3c0a55a2-89a2-4518-a008-625c4c63b850,Namespace:kube-system,Attempt:0,} returns sandbox id \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\"" May 15 10:04:06.925308 kubelet[2175]: E0515 10:04:06.925285 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:06.925422 kubelet[2175]: E0515 10:04:06.925399 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:06.928639 env[1316]: time="2025-05-15T10:04:06.928594407Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:04:06.930048 env[1316]: time="2025-05-15T10:04:06.929766738Z" level=info msg="CreateContainer within sandbox \"9de5fb149d1d600d2f2d99b033c8363b4986ec7d54656d979a612044aeba4f62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:04:06.960693 kubelet[2175]: E0515 10:04:06.960577 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:06.961790 env[1316]: time="2025-05-15T10:04:06.961632342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zptlp,Uid:0bc94d72-24a7-49e3-8072-97a476a47a0a,Namespace:kube-system,Attempt:0,}" May 15 10:04:06.972373 env[1316]: time="2025-05-15T10:04:06.972316635Z" level=info msg="CreateContainer within sandbox \"9de5fb149d1d600d2f2d99b033c8363b4986ec7d54656d979a612044aeba4f62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38dfdbd517d6f61f223cd789991074222601f53164c5cad562d7eb7871dfaf67\"" May 15 10:04:06.973050 env[1316]: time="2025-05-15T10:04:06.973015137Z" level=info msg="StartContainer for \"38dfdbd517d6f61f223cd789991074222601f53164c5cad562d7eb7871dfaf67\"" May 15 10:04:06.979561 env[1316]: time="2025-05-15T10:04:06.979475736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:04:06.979561 env[1316]: time="2025-05-15T10:04:06.979537214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:04:06.979819 env[1316]: time="2025-05-15T10:04:06.979698570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:04:06.979894 env[1316]: time="2025-05-15T10:04:06.979857126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631 pid=2373 runtime=io.containerd.runc.v2 May 15 10:04:07.061349 env[1316]: time="2025-05-15T10:04:07.061303043Z" level=info msg="StartContainer for \"38dfdbd517d6f61f223cd789991074222601f53164c5cad562d7eb7871dfaf67\" returns successfully" May 15 10:04:07.076939 env[1316]: time="2025-05-15T10:04:07.076826954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zptlp,Uid:0bc94d72-24a7-49e3-8072-97a476a47a0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\"" May 15 10:04:07.078914 kubelet[2175]: E0515 10:04:07.077966 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:07.821140 kubelet[2175]: E0515 10:04:07.820644 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:07.831000 kubelet[2175]: I0515 10:04:07.830708 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qp5b9" podStartSLOduration=1.830688857 podStartE2EDuration="1.830688857s" podCreationTimestamp="2025-05-15 10:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:04:07.830478142 +0000 UTC m=+16.150419809" watchObservedRunningTime="2025-05-15 10:04:07.830688857 +0000 UTC m=+16.150630524" May 15 10:04:11.199396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541009304.mount: Deactivated successfully. 
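In the pod_startup_latency_tracker line above, podStartSLOduration is simply the watch-observed running timestamp minus the pod's creation timestamp; the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull contributed to the latency. A quick check of the kube-proxy-qp5b9 numbers (an illustrative sketch, not kubelet's implementation — the timestamp strings are Go's default time.Time format, so they parse back with the reference layout):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-05-15 10:04:06 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-05-15 10:04:07.830688857 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.830688857s, the podStartSLOduration logged for kube-proxy-qp5b9.
	fmt.Println(observed.Sub(created))
}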
May 15 10:04:13.387260 env[1316]: time="2025-05-15T10:04:13.387180075Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:13.391420 env[1316]: time="2025-05-15T10:04:13.391384039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:13.393079 env[1316]: time="2025-05-15T10:04:13.393043209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:13.393614 env[1316]: time="2025-05-15T10:04:13.393589239Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 10:04:13.395688 env[1316]: time="2025-05-15T10:04:13.395658842Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 10:04:13.397886 env[1316]: time="2025-05-15T10:04:13.397855562Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:04:13.407644 env[1316]: time="2025-05-15T10:04:13.407455669Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\"" May 15 10:04:13.408083 env[1316]: time="2025-05-15T10:04:13.408056378Z" level=info msg="StartContainer for \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\"" May 15 10:04:13.496593 env[1316]: time="2025-05-15T10:04:13.496531061Z" level=info msg="StartContainer for \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\" returns successfully" May 15 10:04:13.562295 env[1316]: time="2025-05-15T10:04:13.562249915Z" level=info msg="shim disconnected" id=c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a May 15 10:04:13.562547 env[1316]: time="2025-05-15T10:04:13.562324834Z" level=warning msg="cleaning up after shim disconnected" id=c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a namespace=k8s.io May 15 10:04:13.562547 env[1316]: time="2025-05-15T10:04:13.562336954Z" level=info msg="cleaning up dead shim" May 15 10:04:13.569561 env[1316]: time="2025-05-15T10:04:13.569514464Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" May 15 10:04:13.837300 kubelet[2175]: E0515 10:04:13.836701 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:13.843193 env[1316]: time="2025-05-15T10:04:13.843002449Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:04:13.855184 env[1316]: time="2025-05-15T10:04:13.855136110Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\"" May 15 10:04:13.856230 env[1316]: time="2025-05-15T10:04:13.855634861Z" level=info msg="StartContainer for \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\"" May 15 10:04:13.910785 env[1316]: time="2025-05-15T10:04:13.910726187Z" level=info msg="StartContainer for \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\" returns successfully" May 15 10:04:13.933575 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:04:13.933836 systemd[1]: Stopped systemd-sysctl.service. May 15 10:04:13.933992 systemd[1]: Stopping systemd-sysctl.service... May 15 10:04:13.935562 systemd[1]: Starting systemd-sysctl.service... May 15 10:04:13.945784 systemd[1]: Finished systemd-sysctl.service. May 15 10:04:13.958036 env[1316]: time="2025-05-15T10:04:13.957991014Z" level=info msg="shim disconnected" id=78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98 May 15 10:04:13.958036 env[1316]: time="2025-05-15T10:04:13.958034653Z" level=warning msg="cleaning up after shim disconnected" id=78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98 namespace=k8s.io May 15 10:04:13.958036 env[1316]: time="2025-05-15T10:04:13.958044213Z" level=info msg="cleaning up dead shim" May 15 10:04:13.966318 env[1316]: time="2025-05-15T10:04:13.966282704Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2671 runtime=io.containerd.runc.v2\n" May 15 10:04:14.405923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a-rootfs.mount: Deactivated successfully. May 15 10:04:14.639888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616134852.mount: Deactivated successfully. 
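The mount-cgroup and apply-sysctl-overwrites entries above trace the start of Cilium's init-container chain: each container starts, performs one setup step, exits (hence the paired "returns successfully" and "shim disconnected" cleanup lines), and only then is the next CreateContainer issued; mount-bpf-fs and clean-cilium-state follow below before the long-running cilium-agent. A toy Go sketch of that strict run-to-completion ordering, purely illustrative:

package main

import "fmt"

func main() {
	// Init containers run strictly one after another; each must exit
	// before the next CreateContainer is issued for the sandbox.
	chain := []string{
		"mount-cgroup",
		"apply-sysctl-overwrites",
		"mount-bpf-fs",
		"clean-cilium-state",
	}
	for _, name := range chain {
		fmt.Println("StartContainer:", name)
		fmt.Println("shim disconnected:", name) // container exited, shim cleaned up
	}
	fmt.Println("StartContainer: cilium-agent") // the long-running main container
}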
May 15 10:04:14.839182 kubelet[2175]: E0515 10:04:14.839126 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:14.848267 env[1316]: time="2025-05-15T10:04:14.843001763Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:04:14.865630 env[1316]: time="2025-05-15T10:04:14.857757748Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\"" May 15 10:04:14.866571 env[1316]: time="2025-05-15T10:04:14.866370959Z" level=info msg="StartContainer for \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\"" May 15 10:04:14.943282 env[1316]: time="2025-05-15T10:04:14.941612779Z" level=info msg="StartContainer for \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\" returns successfully" May 15 10:04:15.027026 env[1316]: time="2025-05-15T10:04:15.026982081Z" level=info msg="shim disconnected" id=e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c May 15 10:04:15.027347 env[1316]: time="2025-05-15T10:04:15.027328396Z" level=warning msg="cleaning up after shim disconnected" id=e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c namespace=k8s.io May 15 10:04:15.027438 env[1316]: time="2025-05-15T10:04:15.027423194Z" level=info msg="cleaning up dead shim" May 15 10:04:15.042135 env[1316]: time="2025-05-15T10:04:15.042093991Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2729 runtime=io.containerd.runc.v2\n" May 15 10:04:15.156368 env[1316]: time="2025-05-15T10:04:15.156250019Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:15.161778 env[1316]: time="2025-05-15T10:04:15.161729608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:15.163240 env[1316]: time="2025-05-15T10:04:15.163215023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:04:15.163652 env[1316]: time="2025-05-15T10:04:15.163621617Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 10:04:15.168097 env[1316]: time="2025-05-15T10:04:15.166344931Z" level=info msg="CreateContainer within sandbox \"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 10:04:15.177109 env[1316]: time="2025-05-15T10:04:15.177022875Z" level=info msg="CreateContainer within sandbox 
\"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\"" May 15 10:04:15.178192 env[1316]: time="2025-05-15T10:04:15.177542986Z" level=info msg="StartContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\"" May 15 10:04:15.248428 env[1316]: time="2025-05-15T10:04:15.248367932Z" level=info msg="StartContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" returns successfully" May 15 10:04:15.842505 kubelet[2175]: E0515 10:04:15.842442 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:15.845532 kubelet[2175]: E0515 10:04:15.845498 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:15.848524 env[1316]: time="2025-05-15T10:04:15.848467866Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:04:15.856554 kubelet[2175]: I0515 10:04:15.856477 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zptlp" podStartSLOduration=1.771016791 podStartE2EDuration="9.856461894s" podCreationTimestamp="2025-05-15 10:04:06 +0000 UTC" firstStartedPulling="2025-05-15 10:04:07.079334254 +0000 UTC m=+15.399275881" lastFinishedPulling="2025-05-15 10:04:15.164779317 +0000 UTC m=+23.484720984" observedRunningTime="2025-05-15 10:04:15.855962942 +0000 UTC m=+24.175904609" watchObservedRunningTime="2025-05-15 10:04:15.856461894 +0000 UTC m=+24.176403561" May 15 10:04:15.870538 env[1316]: time="2025-05-15T10:04:15.870475461Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\"" May 15 10:04:15.871268 env[1316]: time="2025-05-15T10:04:15.871236249Z" level=info msg="StartContainer for \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\"" May 15 10:04:15.981371 env[1316]: time="2025-05-15T10:04:15.981326024Z" level=info msg="StartContainer for \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\" returns successfully" May 15 10:04:16.038285 env[1316]: time="2025-05-15T10:04:16.038227146Z" level=info msg="shim disconnected" id=c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73 May 15 10:04:16.038285 env[1316]: time="2025-05-15T10:04:16.038287145Z" level=warning msg="cleaning up after shim disconnected" id=c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73 namespace=k8s.io May 15 10:04:16.038533 env[1316]: time="2025-05-15T10:04:16.038299425Z" level=info msg="cleaning up dead shim" May 15 10:04:16.045184 env[1316]: time="2025-05-15T10:04:16.045117396Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2821 runtime=io.containerd.runc.v2\n" May 15 10:04:16.405879 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73-rootfs.mount: Deactivated successfully. May 15 10:04:16.849631 kubelet[2175]: E0515 10:04:16.849469 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:16.849995 kubelet[2175]: E0515 10:04:16.849758 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:16.851963 env[1316]: time="2025-05-15T10:04:16.851924483Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:04:16.866650 env[1316]: time="2025-05-15T10:04:16.866605329Z" level=info msg="CreateContainer within sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\"" May 15 10:04:16.869445 env[1316]: time="2025-05-15T10:04:16.869408965Z" level=info msg="StartContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\"" May 15 10:04:16.932739 env[1316]: time="2025-05-15T10:04:16.932694998Z" level=info msg="StartContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" returns successfully" May 15 10:04:17.043515 kubelet[2175]: I0515 10:04:17.043339 2175 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 10:04:17.065382 kubelet[2175]: I0515 10:04:17.065199 2175 topology_manager.go:215] "Topology Admit Handler" podUID="758c4fc6-bd17-4106-9b69-e0be86e8f7ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hvpck" May 15 10:04:17.073010 kubelet[2175]: I0515 10:04:17.072971 2175 topology_manager.go:215] "Topology Admit Handler" podUID="88dafbb3-d639-45d5-84de-121d32dd7eac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wqlvc" May 15 10:04:17.235120 kubelet[2175]: I0515 10:04:17.235010 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkzh\" (UniqueName: \"kubernetes.io/projected/88dafbb3-d639-45d5-84de-121d32dd7eac-kube-api-access-wpkzh\") pod \"coredns-7db6d8ff4d-wqlvc\" (UID: \"88dafbb3-d639-45d5-84de-121d32dd7eac\") " pod="kube-system/coredns-7db6d8ff4d-wqlvc" May 15 10:04:17.235336 kubelet[2175]: I0515 10:04:17.235315 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88dafbb3-d639-45d5-84de-121d32dd7eac-config-volume\") pod \"coredns-7db6d8ff4d-wqlvc\" (UID: \"88dafbb3-d639-45d5-84de-121d32dd7eac\") " pod="kube-system/coredns-7db6d8ff4d-wqlvc" May 15 10:04:17.235454 kubelet[2175]: I0515 10:04:17.235437 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9kzt\" (UniqueName: \"kubernetes.io/projected/758c4fc6-bd17-4106-9b69-e0be86e8f7ec-kube-api-access-j9kzt\") pod \"coredns-7db6d8ff4d-hvpck\" (UID: \"758c4fc6-bd17-4106-9b69-e0be86e8f7ec\") " pod="kube-system/coredns-7db6d8ff4d-hvpck" May 15 10:04:17.235560 kubelet[2175]: I0515 10:04:17.235547 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/758c4fc6-bd17-4106-9b69-e0be86e8f7ec-config-volume\") pod \"coredns-7db6d8ff4d-hvpck\" (UID: \"758c4fc6-bd17-4106-9b69-e0be86e8f7ec\") " pod="kube-system/coredns-7db6d8ff4d-hvpck" May 15 10:04:17.276244 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 15 10:04:17.372706 kubelet[2175]: E0515 10:04:17.372668 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:17.373448 env[1316]: time="2025-05-15T10:04:17.373397661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hvpck,Uid:758c4fc6-bd17-4106-9b69-e0be86e8f7ec,Namespace:kube-system,Attempt:0,}" May 15 10:04:17.384179 kubelet[2175]: E0515 10:04:17.384151 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:17.384929 env[1316]: time="2025-05-15T10:04:17.384890325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqlvc,Uid:88dafbb3-d639-45d5-84de-121d32dd7eac,Namespace:kube-system,Attempt:0,}" May 15 10:04:17.515326 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:37810.service. May 15 10:04:17.571287 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 15 10:04:17.576667 sshd[2968]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:17.578449 sshd[2968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:17.582371 systemd-logind[1306]: New session 6 of user core. May 15 10:04:17.583235 systemd[1]: Started session-6.scope. May 15 10:04:17.711135 sshd[2968]: pam_unix(sshd:session): session closed for user core May 15 10:04:17.713835 systemd-logind[1306]: Session 6 logged out. Waiting for processes to exit. May 15 10:04:17.714061 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:37810.service: Deactivated successfully. May 15 10:04:17.714891 systemd[1]: session-6.scope: Deactivated successfully. May 15 10:04:17.715319 systemd-logind[1306]: Removed session 6. 
May 15 10:04:17.855367 kubelet[2175]: E0515 10:04:17.855237 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:17.869275 kubelet[2175]: I0515 10:04:17.869222 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vwz6w" podStartSLOduration=5.400682287 podStartE2EDuration="11.869192165s" podCreationTimestamp="2025-05-15 10:04:06 +0000 UTC" firstStartedPulling="2025-05-15 10:04:06.927035646 +0000 UTC m=+15.246977313" lastFinishedPulling="2025-05-15 10:04:13.395545524 +0000 UTC m=+21.715487191" observedRunningTime="2025-05-15 10:04:17.868960529 +0000 UTC m=+26.188902156" watchObservedRunningTime="2025-05-15 10:04:17.869192165 +0000 UTC m=+26.189133832" May 15 10:04:18.856684 kubelet[2175]: E0515 10:04:18.856651 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:19.195791 systemd-networkd[1101]: cilium_host: Link UP May 15 10:04:19.195919 systemd-networkd[1101]: cilium_net: Link UP May 15 10:04:19.197941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 10:04:19.198013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 10:04:19.198137 systemd-networkd[1101]: cilium_net: Gained carrier May 15 10:04:19.198350 systemd-networkd[1101]: cilium_host: Gained carrier May 15 10:04:19.198462 systemd-networkd[1101]: cilium_net: Gained IPv6LL May 15 10:04:19.198573 systemd-networkd[1101]: cilium_host: Gained IPv6LL May 15 10:04:19.295526 systemd-networkd[1101]: cilium_vxlan: Link UP May 15 10:04:19.295532 systemd-networkd[1101]: cilium_vxlan: Gained carrier May 15 10:04:19.611233 kernel: NET: Registered PF_ALG protocol family May 15 10:04:19.858020 kubelet[2175]: E0515 10:04:19.857981 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:20.207250 systemd-networkd[1101]: lxc_health: Link UP May 15 10:04:20.218179 systemd-networkd[1101]: lxc_health: Gained carrier May 15 10:04:20.218344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:04:20.481736 systemd-networkd[1101]: lxc5687c4b090c7: Link UP May 15 10:04:20.490243 kernel: eth0: renamed from tmp1b10e May 15 10:04:20.498366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5687c4b090c7: link becomes ready May 15 10:04:20.497975 systemd-networkd[1101]: lxc5687c4b090c7: Gained carrier May 15 10:04:20.506689 systemd-networkd[1101]: lxc5aa939f20bec: Link UP May 15 10:04:20.517273 kernel: eth0: renamed from tmp9c794 May 15 10:04:20.522736 systemd-networkd[1101]: lxc5aa939f20bec: Gained carrier May 15 10:04:20.523225 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5aa939f20bec: link becomes ready May 15 10:04:20.869342 kubelet[2175]: E0515 10:04:20.869289 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:21.278355 systemd-networkd[1101]: cilium_vxlan: Gained IPv6LL May 15 10:04:21.663350 systemd-networkd[1101]: lxc_health: Gained IPv6LL May 15 10:04:21.871587 kubelet[2175]: E0515 10:04:21.871538 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:21.983363 systemd-networkd[1101]: lxc5687c4b090c7: Gained IPv6LL May 15 10:04:22.239453 systemd-networkd[1101]: lxc5aa939f20bec: Gained IPv6LL May 15 10:04:22.714014 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:33116.service. May 15 10:04:22.754828 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 33116 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:22.756433 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:22.760879 systemd-logind[1306]: New session 7 of user core. May 15 10:04:22.761472 systemd[1]: Started session-7.scope. May 15 10:04:22.869086 kubelet[2175]: E0515 10:04:22.869038 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:22.892515 sshd[3392]: pam_unix(sshd:session): session closed for user core May 15 10:04:22.894854 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:33116.service: Deactivated successfully. May 15 10:04:22.895947 systemd[1]: session-7.scope: Deactivated successfully. May 15 10:04:22.895964 systemd-logind[1306]: Session 7 logged out. Waiting for processes to exit. May 15 10:04:22.897057 systemd-logind[1306]: Removed session 7. May 15 10:04:24.216218 env[1316]: time="2025-05-15T10:04:24.216138754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:04:24.216218 env[1316]: time="2025-05-15T10:04:24.216177193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:04:24.216218 env[1316]: time="2025-05-15T10:04:24.216187713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:04:24.216749 env[1316]: time="2025-05-15T10:04:24.216719907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b10e55cb0e53d5245e6c58a4d3263db13f90f1593d2919b1ea18714bdded185 pid=3426 runtime=io.containerd.runc.v2 May 15 10:04:24.251476 env[1316]: time="2025-05-15T10:04:24.251372256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:04:24.251621 env[1316]: time="2025-05-15T10:04:24.251491614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:04:24.251621 env[1316]: time="2025-05-15T10:04:24.251519334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:04:24.251972 env[1316]: time="2025-05-15T10:04:24.251917969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c794b34c3f3cc33fbb563f581bf7498db96ae2bdee40f5ca2cb87b703e37e74 pid=3456 runtime=io.containerd.runc.v2 May 15 10:04:24.281350 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:04:24.307455 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:04:24.307613 env[1316]: time="2025-05-15T10:04:24.307547389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hvpck,Uid:758c4fc6-bd17-4106-9b69-e0be86e8f7ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b10e55cb0e53d5245e6c58a4d3263db13f90f1593d2919b1ea18714bdded185\"" May 15 10:04:24.308102 kubelet[2175]: E0515 10:04:24.308082 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:24.310384 env[1316]: time="2025-05-15T10:04:24.310356156Z" level=info msg="CreateContainer within sandbox \"1b10e55cb0e53d5245e6c58a4d3263db13f90f1593d2919b1ea18714bdded185\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:04:24.326731 env[1316]: time="2025-05-15T10:04:24.326694322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqlvc,Uid:88dafbb3-d639-45d5-84de-121d32dd7eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c794b34c3f3cc33fbb563f581bf7498db96ae2bdee40f5ca2cb87b703e37e74\"" May 15 10:04:24.327262 kubelet[2175]: E0515 10:04:24.327236 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:24.329153 env[1316]: time="2025-05-15T10:04:24.329119253Z" level=info msg="CreateContainer within sandbox \"9c794b34c3f3cc33fbb563f581bf7498db96ae2bdee40f5ca2cb87b703e37e74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 10:04:24.391654 env[1316]: time="2025-05-15T10:04:24.391600791Z" level=info msg="CreateContainer within sandbox \"1b10e55cb0e53d5245e6c58a4d3263db13f90f1593d2919b1ea18714bdded185\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"706ae77c079acce7e5f8040ad75a728c1f4cc90c45bb400c1e869d94ec09c582\"" May 15 10:04:24.392381 env[1316]: time="2025-05-15T10:04:24.392352102Z" level=info msg="StartContainer for \"706ae77c079acce7e5f8040ad75a728c1f4cc90c45bb400c1e869d94ec09c582\"" May 15 10:04:24.400410 env[1316]: time="2025-05-15T10:04:24.400359167Z" level=info msg="CreateContainer within sandbox \"9c794b34c3f3cc33fbb563f581bf7498db96ae2bdee40f5ca2cb87b703e37e74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"263f9f2baf1eba16bf95b5956cd1754f5260574b0001e906f21dbb7a45e4562f\"" May 15 10:04:24.400938 env[1316]: time="2025-05-15T10:04:24.400871881Z" level=info msg="StartContainer for \"263f9f2baf1eba16bf95b5956cd1754f5260574b0001e906f21dbb7a45e4562f\"" May 15 10:04:24.459679 env[1316]: time="2025-05-15T10:04:24.459616024Z" level=info msg="StartContainer for \"263f9f2baf1eba16bf95b5956cd1754f5260574b0001e906f21dbb7a45e4562f\" returns successfully" May 15 10:04:24.477990 env[1316]: time="2025-05-15T10:04:24.477885327Z" level=info msg="StartContainer 
for \"706ae77c079acce7e5f8040ad75a728c1f4cc90c45bb400c1e869d94ec09c582\" returns successfully" May 15 10:04:24.873174 kubelet[2175]: E0515 10:04:24.873131 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:24.875426 kubelet[2175]: E0515 10:04:24.875397 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:24.886236 kubelet[2175]: I0515 10:04:24.886170 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wqlvc" podStartSLOduration=18.886156842 podStartE2EDuration="18.886156842s" podCreationTimestamp="2025-05-15 10:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:04:24.885767207 +0000 UTC m=+33.205708874" watchObservedRunningTime="2025-05-15 10:04:24.886156842 +0000 UTC m=+33.206098509" May 15 10:04:24.906475 kubelet[2175]: I0515 10:04:24.906415 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hvpck" podStartSLOduration=18.906400362 podStartE2EDuration="18.906400362s" podCreationTimestamp="2025-05-15 10:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:04:24.895985846 +0000 UTC m=+33.215927553" watchObservedRunningTime="2025-05-15 10:04:24.906400362 +0000 UTC m=+33.226342029" May 15 10:04:25.877830 kubelet[2175]: E0515 10:04:25.877796 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:25.878274 kubelet[2175]: E0515 10:04:25.877870 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:26.879017 kubelet[2175]: E0515 10:04:26.878986 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:26.879742 kubelet[2175]: E0515 10:04:26.879721 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:04:27.895948 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:33122.service. May 15 10:04:27.935238 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:27.935033 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:27.941045 systemd-logind[1306]: New session 8 of user core. May 15 10:04:27.942041 systemd[1]: Started session-8.scope. May 15 10:04:28.056434 sshd[3583]: pam_unix(sshd:session): session closed for user core May 15 10:04:28.059315 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:33122.service: Deactivated successfully. May 15 10:04:28.060583 systemd[1]: session-8.scope: Deactivated successfully. May 15 10:04:28.061019 systemd-logind[1306]: Session 8 logged out. Waiting for processes to exit. May 15 10:04:28.062175 systemd-logind[1306]: Removed session 8. 
May 15 10:04:33.058949 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:55586.service. May 15 10:04:33.105781 sshd[3598]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:33.107101 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:33.111309 systemd-logind[1306]: New session 9 of user core. May 15 10:04:33.111838 systemd[1]: Started session-9.scope. May 15 10:04:33.226324 sshd[3598]: pam_unix(sshd:session): session closed for user core May 15 10:04:33.229154 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:55588.service. May 15 10:04:33.229808 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:55586.service: Deactivated successfully. May 15 10:04:33.230863 systemd-logind[1306]: Session 9 logged out. Waiting for processes to exit. May 15 10:04:33.230928 systemd[1]: session-9.scope: Deactivated successfully. May 15 10:04:33.232122 systemd-logind[1306]: Removed session 9. May 15 10:04:33.264584 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:33.265805 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:33.270103 systemd-logind[1306]: New session 10 of user core. May 15 10:04:33.271015 systemd[1]: Started session-10.scope. May 15 10:04:33.423321 sshd[3613]: pam_unix(sshd:session): session closed for user core May 15 10:04:33.426643 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:55592.service. May 15 10:04:33.441556 systemd-logind[1306]: Session 10 logged out. Waiting for processes to exit. May 15 10:04:33.441756 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:55588.service: Deactivated successfully. May 15 10:04:33.442714 systemd[1]: session-10.scope: Deactivated successfully. May 15 10:04:33.443193 systemd-logind[1306]: Removed session 10. May 15 10:04:33.470456 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 55592 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:33.471632 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:33.475173 systemd-logind[1306]: New session 11 of user core. May 15 10:04:33.476031 systemd[1]: Started session-11.scope. May 15 10:04:33.584637 sshd[3624]: pam_unix(sshd:session): session closed for user core May 15 10:04:33.587099 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:55592.service: Deactivated successfully. May 15 10:04:33.588269 systemd-logind[1306]: Session 11 logged out. Waiting for processes to exit. May 15 10:04:33.588450 systemd[1]: session-11.scope: Deactivated successfully. May 15 10:04:33.589309 systemd-logind[1306]: Removed session 11. May 15 10:04:38.588345 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:55604.service. May 15 10:04:38.621911 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 55604 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:38.623190 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:38.626884 systemd-logind[1306]: New session 12 of user core. May 15 10:04:38.627793 systemd[1]: Started session-12.scope. May 15 10:04:38.751241 sshd[3644]: pam_unix(sshd:session): session closed for user core May 15 10:04:38.753685 systemd-logind[1306]: Session 12 logged out. Waiting for processes to exit. May 15 10:04:38.753917 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:55604.service: Deactivated successfully. 
May 15 10:04:38.754755 systemd[1]: session-12.scope: Deactivated successfully. May 15 10:04:38.755184 systemd-logind[1306]: Removed session 12. May 15 10:04:43.754542 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:52580.service. May 15 10:04:43.788188 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 52580 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:43.789601 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:43.794022 systemd-logind[1306]: New session 13 of user core. May 15 10:04:43.794345 systemd[1]: Started session-13.scope. May 15 10:04:43.901022 sshd[3658]: pam_unix(sshd:session): session closed for user core May 15 10:04:43.903604 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:52588.service. May 15 10:04:43.904726 systemd-logind[1306]: Session 13 logged out. Waiting for processes to exit. May 15 10:04:43.904938 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:52580.service: Deactivated successfully. May 15 10:04:43.905789 systemd[1]: session-13.scope: Deactivated successfully. May 15 10:04:43.906270 systemd-logind[1306]: Removed session 13. May 15 10:04:43.938601 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:43.939843 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:43.944331 systemd[1]: Started session-14.scope. May 15 10:04:43.944511 systemd-logind[1306]: New session 14 of user core. May 15 10:04:44.190242 sshd[3671]: pam_unix(sshd:session): session closed for user core May 15 10:04:44.192467 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:52600.service. May 15 10:04:44.194859 systemd-logind[1306]: Session 14 logged out. Waiting for processes to exit. May 15 10:04:44.195043 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:52588.service: Deactivated successfully. May 15 10:04:44.195865 systemd[1]: session-14.scope: Deactivated successfully. May 15 10:04:44.196327 systemd-logind[1306]: Removed session 14. May 15 10:04:44.230038 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 52600 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:44.231682 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:44.235118 systemd-logind[1306]: New session 15 of user core. May 15 10:04:44.236056 systemd[1]: Started session-15.scope. May 15 10:04:45.558121 sshd[3683]: pam_unix(sshd:session): session closed for user core May 15 10:04:45.564302 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:52602.service. May 15 10:04:45.564880 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:52600.service: Deactivated successfully. May 15 10:04:45.567085 systemd[1]: session-15.scope: Deactivated successfully. May 15 10:04:45.567131 systemd-logind[1306]: Session 15 logged out. Waiting for processes to exit. May 15 10:04:45.568909 systemd-logind[1306]: Removed session 15. May 15 10:04:45.601038 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 52602 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:45.602400 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:45.605809 systemd-logind[1306]: New session 16 of user core. May 15 10:04:45.606725 systemd[1]: Started session-16.scope. May 15 10:04:45.827833 sshd[3703]: pam_unix(sshd:session): session closed for user core May 15 10:04:45.833413 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:52616.service. 
May 15 10:04:45.834973 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:52602.service: Deactivated successfully. May 15 10:04:45.836389 systemd[1]: session-16.scope: Deactivated successfully. May 15 10:04:45.836902 systemd-logind[1306]: Session 16 logged out. Waiting for processes to exit. May 15 10:04:45.837955 systemd-logind[1306]: Removed session 16. May 15 10:04:45.874159 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 52616 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:45.875702 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:45.879563 systemd-logind[1306]: New session 17 of user core. May 15 10:04:45.880832 systemd[1]: Started session-17.scope. May 15 10:04:46.009953 sshd[3717]: pam_unix(sshd:session): session closed for user core May 15 10:04:46.012397 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:52616.service: Deactivated successfully. May 15 10:04:46.013447 systemd-logind[1306]: Session 17 logged out. Waiting for processes to exit. May 15 10:04:46.013451 systemd[1]: session-17.scope: Deactivated successfully. May 15 10:04:46.014566 systemd-logind[1306]: Removed session 17. May 15 10:04:51.014084 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:52632.service. May 15 10:04:51.049795 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 52632 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:51.051136 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:51.058295 systemd-logind[1306]: New session 18 of user core. May 15 10:04:51.059317 systemd[1]: Started session-18.scope. May 15 10:04:51.177615 sshd[3737]: pam_unix(sshd:session): session closed for user core May 15 10:04:51.181359 systemd-logind[1306]: Session 18 logged out. Waiting for processes to exit. May 15 10:04:51.181850 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:52632.service: Deactivated successfully. May 15 10:04:51.182775 systemd[1]: session-18.scope: Deactivated successfully. May 15 10:04:51.183395 systemd-logind[1306]: Removed session 18. May 15 10:04:56.180816 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:49322.service. May 15 10:04:56.215211 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 49322 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:04:56.216686 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:04:56.221770 systemd-logind[1306]: New session 19 of user core. May 15 10:04:56.222184 systemd[1]: Started session-19.scope. May 15 10:04:56.335513 sshd[3754]: pam_unix(sshd:session): session closed for user core May 15 10:04:56.337974 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:49322.service: Deactivated successfully. May 15 10:04:56.339039 systemd[1]: session-19.scope: Deactivated successfully. May 15 10:04:56.339050 systemd-logind[1306]: Session 19 logged out. Waiting for processes to exit. May 15 10:04:56.339994 systemd-logind[1306]: Removed session 19. May 15 10:05:01.338804 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:49326.service. May 15 10:05:01.378903 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 49326 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:01.380427 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:01.384905 systemd-logind[1306]: New session 20 of user core. May 15 10:05:01.385418 systemd[1]: Started session-20.scope. 
May 15 10:05:01.512388 sshd[3768]: pam_unix(sshd:session): session closed for user core May 15 10:05:01.515665 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:49326.service: Deactivated successfully. May 15 10:05:01.516838 systemd-logind[1306]: Session 20 logged out. Waiting for processes to exit. May 15 10:05:01.516860 systemd[1]: session-20.scope: Deactivated successfully. May 15 10:05:01.517818 systemd-logind[1306]: Removed session 20. May 15 10:05:06.520412 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:57770.service. May 15 10:05:06.566785 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 57770 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:06.568227 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:06.572884 systemd-logind[1306]: New session 21 of user core. May 15 10:05:06.573668 systemd[1]: Started session-21.scope. May 15 10:05:06.698632 sshd[3783]: pam_unix(sshd:session): session closed for user core May 15 10:05:06.701534 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:57784.service. May 15 10:05:06.702926 systemd-logind[1306]: Session 21 logged out. Waiting for processes to exit. May 15 10:05:06.703058 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:57770.service: Deactivated successfully. May 15 10:05:06.703981 systemd[1]: session-21.scope: Deactivated successfully. May 15 10:05:06.704502 systemd-logind[1306]: Removed session 21. May 15 10:05:06.735849 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:06.737225 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:06.740986 systemd-logind[1306]: New session 22 of user core. May 15 10:05:06.742064 systemd[1]: Started session-22.scope. May 15 10:05:08.631225 env[1316]: time="2025-05-15T10:05:08.631154276Z" level=info msg="StopContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" with timeout 30 (s)" May 15 10:05:08.634080 env[1316]: time="2025-05-15T10:05:08.634026459Z" level=info msg="Stop container \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" with signal terminated" May 15 10:05:08.677919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b-rootfs.mount: Deactivated successfully. 
May 15 10:05:08.696625 env[1316]: time="2025-05-15T10:05:08.696568962Z" level=info msg="shim disconnected" id=b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b May 15 10:05:08.696625 env[1316]: time="2025-05-15T10:05:08.696620002Z" level=warning msg="cleaning up after shim disconnected" id=b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b namespace=k8s.io May 15 10:05:08.696625 env[1316]: time="2025-05-15T10:05:08.696630042Z" level=info msg="cleaning up dead shim" May 15 10:05:08.702982 env[1316]: time="2025-05-15T10:05:08.702911493Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:05:08.706344 env[1316]: time="2025-05-15T10:05:08.705854636Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3845 runtime=io.containerd.runc.v2\n" May 15 10:05:08.708582 env[1316]: time="2025-05-15T10:05:08.708369257Z" level=info msg="StopContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" returns successfully" May 15 10:05:08.708900 env[1316]: time="2025-05-15T10:05:08.708874181Z" level=info msg="StopContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" with timeout 2 (s)" May 15 10:05:08.708968 env[1316]: time="2025-05-15T10:05:08.708910501Z" level=info msg="StopPodSandbox for \"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\"" May 15 10:05:08.709002 env[1316]: time="2025-05-15T10:05:08.708978261Z" level=info msg="Container to stop \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.709310 env[1316]: time="2025-05-15T10:05:08.709287704Z" level=info msg="Stop container \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" with signal terminated" May 15 10:05:08.712104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631-shm.mount: Deactivated successfully. May 15 10:05:08.715702 systemd-networkd[1101]: lxc_health: Link DOWN May 15 10:05:08.715710 systemd-networkd[1101]: lxc_health: Lost carrier May 15 10:05:08.739249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631-rootfs.mount: Deactivated successfully. 
May 15 10:05:08.744824 env[1316]: time="2025-05-15T10:05:08.744761509Z" level=info msg="shim disconnected" id=3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631 May 15 10:05:08.744824 env[1316]: time="2025-05-15T10:05:08.744822509Z" level=warning msg="cleaning up after shim disconnected" id=3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631 namespace=k8s.io May 15 10:05:08.745027 env[1316]: time="2025-05-15T10:05:08.744834669Z" level=info msg="cleaning up dead shim" May 15 10:05:08.754470 env[1316]: time="2025-05-15T10:05:08.754406866Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" May 15 10:05:08.754831 env[1316]: time="2025-05-15T10:05:08.754788989Z" level=info msg="TearDown network for sandbox \"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\" successfully" May 15 10:05:08.754901 env[1316]: time="2025-05-15T10:05:08.754831590Z" level=info msg="StopPodSandbox for \"3e4737135c8ec265903d4a4db8c707c8bd488070090999fe1bf2d9242cbf4631\" returns successfully" May 15 10:05:08.768124 env[1316]: time="2025-05-15T10:05:08.768072696Z" level=info msg="shim disconnected" id=fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49 May 15 10:05:08.768124 env[1316]: time="2025-05-15T10:05:08.768121096Z" level=warning msg="cleaning up after shim disconnected" id=fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49 namespace=k8s.io May 15 10:05:08.768124 env[1316]: time="2025-05-15T10:05:08.768132977Z" level=info msg="cleaning up dead shim" May 15 10:05:08.776736 env[1316]: time="2025-05-15T10:05:08.776684405Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3917 runtime=io.containerd.runc.v2\n" May 15 10:05:08.778925 env[1316]: time="2025-05-15T10:05:08.778878863Z" level=info msg="StopContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" returns successfully" May 15 10:05:08.779455 env[1316]: time="2025-05-15T10:05:08.779412987Z" level=info msg="StopPodSandbox for \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\"" May 15 10:05:08.779535 env[1316]: time="2025-05-15T10:05:08.779482988Z" level=info msg="Container to stop \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.779535 env[1316]: time="2025-05-15T10:05:08.779497708Z" level=info msg="Container to stop \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.779535 env[1316]: time="2025-05-15T10:05:08.779508908Z" level=info msg="Container to stop \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.779535 env[1316]: time="2025-05-15T10:05:08.779520708Z" level=info msg="Container to stop \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.779535 env[1316]: time="2025-05-15T10:05:08.779531708Z" level=info msg="Container to stop \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:08.802594 env[1316]: 
time="2025-05-15T10:05:08.802548813Z" level=info msg="shim disconnected" id=7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb May 15 10:05:08.803295 env[1316]: time="2025-05-15T10:05:08.803260139Z" level=warning msg="cleaning up after shim disconnected" id=7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb namespace=k8s.io May 15 10:05:08.803295 env[1316]: time="2025-05-15T10:05:08.803286059Z" level=info msg="cleaning up dead shim" May 15 10:05:08.811081 env[1316]: time="2025-05-15T10:05:08.811016761Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3950 runtime=io.containerd.runc.v2\n" May 15 10:05:08.811410 env[1316]: time="2025-05-15T10:05:08.811380764Z" level=info msg="TearDown network for sandbox \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" successfully" May 15 10:05:08.811477 env[1316]: time="2025-05-15T10:05:08.811415524Z" level=info msg="StopPodSandbox for \"7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb\" returns successfully" May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935321 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bc94d72-24a7-49e3-8072-97a476a47a0a-cilium-config-path\") pod \"0bc94d72-24a7-49e3-8072-97a476a47a0a\" (UID: \"0bc94d72-24a7-49e3-8072-97a476a47a0a\") " May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935372 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-lib-modules\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935394 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-cgroup\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935423 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-config-path\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935446 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbshp\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-kube-api-access-zbshp\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936171 kubelet[2175]: I0515 10:05:08.935461 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cni-path\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935476 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-kernel\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: 
\"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935500 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c0a55a2-89a2-4518-a008-625c4c63b850-clustermesh-secrets\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935515 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-net\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935536 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-hubble-tls\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935550 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-run\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936663 kubelet[2175]: I0515 10:05:08.935619 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-hostproc\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936802 kubelet[2175]: I0515 10:05:08.935633 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-bpf-maps\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936802 kubelet[2175]: I0515 10:05:08.935651 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzkp\" (UniqueName: \"kubernetes.io/projected/0bc94d72-24a7-49e3-8072-97a476a47a0a-kube-api-access-vxzkp\") pod \"0bc94d72-24a7-49e3-8072-97a476a47a0a\" (UID: \"0bc94d72-24a7-49e3-8072-97a476a47a0a\") " May 15 10:05:08.936802 kubelet[2175]: I0515 10:05:08.935677 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-xtables-lock\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.936802 kubelet[2175]: I0515 10:05:08.935693 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-etc-cni-netd\") pod \"3c0a55a2-89a2-4518-a008-625c4c63b850\" (UID: \"3c0a55a2-89a2-4518-a008-625c4c63b850\") " May 15 10:05:08.938755 kubelet[2175]: I0515 10:05:08.938711 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.938814 kubelet[2175]: I0515 10:05:08.938714 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.938841 kubelet[2175]: I0515 10:05:08.938816 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.942651 kubelet[2175]: I0515 10:05:08.942599 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:05:08.942814 kubelet[2175]: I0515 10:05:08.942674 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.942814 kubelet[2175]: I0515 10:05:08.942694 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.942814 kubelet[2175]: I0515 10:05:08.942712 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.942814 kubelet[2175]: I0515 10:05:08.942728 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.943542 kubelet[2175]: I0515 10:05:08.943504 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-kube-api-access-zbshp" (OuterVolumeSpecName: "kube-api-access-zbshp") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). 
InnerVolumeSpecName "kube-api-access-zbshp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:05:08.943597 kubelet[2175]: I0515 10:05:08.943569 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.944078 kubelet[2175]: I0515 10:05:08.944043 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0a55a2-89a2-4518-a008-625c4c63b850-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:05:08.944137 kubelet[2175]: I0515 10:05:08.944095 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.944137 kubelet[2175]: I0515 10:05:08.944124 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:08.945913 kubelet[2175]: I0515 10:05:08.945877 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc94d72-24a7-49e3-8072-97a476a47a0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bc94d72-24a7-49e3-8072-97a476a47a0a" (UID: "0bc94d72-24a7-49e3-8072-97a476a47a0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:05:08.946031 kubelet[2175]: I0515 10:05:08.945941 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c0a55a2-89a2-4518-a008-625c4c63b850" (UID: "3c0a55a2-89a2-4518-a008-625c4c63b850"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:05:08.946328 kubelet[2175]: I0515 10:05:08.946287 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc94d72-24a7-49e3-8072-97a476a47a0a-kube-api-access-vxzkp" (OuterVolumeSpecName: "kube-api-access-vxzkp") pod "0bc94d72-24a7-49e3-8072-97a476a47a0a" (UID: "0bc94d72-24a7-49e3-8072-97a476a47a0a"). InnerVolumeSpecName "kube-api-access-vxzkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:05:08.962953 kubelet[2175]: I0515 10:05:08.962895 2175 scope.go:117] "RemoveContainer" containerID="b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b" May 15 10:05:08.966599 env[1316]: time="2025-05-15T10:05:08.966560370Z" level=info msg="RemoveContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\"" May 15 10:05:08.970012 env[1316]: time="2025-05-15T10:05:08.969961118Z" level=info msg="RemoveContainer for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" returns successfully" May 15 10:05:08.970377 kubelet[2175]: I0515 10:05:08.970344 2175 scope.go:117] "RemoveContainer" containerID="b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b" May 15 10:05:08.970817 env[1316]: time="2025-05-15T10:05:08.970730804Z" level=error msg="ContainerStatus for \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\": not found" May 15 10:05:08.971077 kubelet[2175]: E0515 10:05:08.971030 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\": not found" containerID="b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b" May 15 10:05:08.971164 kubelet[2175]: I0515 10:05:08.971081 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b"} err="failed to get container status \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b43c3a80c60d34cba8203db3d93b86d8a47bcbae2e6659c400746469e122639b\": not found" May 15 10:05:08.971164 kubelet[2175]: I0515 10:05:08.971162 2175 scope.go:117] "RemoveContainer" containerID="fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49" May 15 10:05:08.972697 env[1316]: time="2025-05-15T10:05:08.972665019Z" level=info msg="RemoveContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\"" May 15 10:05:08.976620 env[1316]: time="2025-05-15T10:05:08.976581411Z" level=info msg="RemoveContainer for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" returns successfully" May 15 10:05:08.976805 kubelet[2175]: I0515 10:05:08.976780 2175 scope.go:117] "RemoveContainer" containerID="c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73" May 15 10:05:08.981518 env[1316]: time="2025-05-15T10:05:08.981474050Z" level=info msg="RemoveContainer for \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\"" May 15 10:05:08.984163 env[1316]: time="2025-05-15T10:05:08.984079511Z" level=info msg="RemoveContainer for \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\" returns successfully" May 15 10:05:08.984461 kubelet[2175]: I0515 10:05:08.984437 2175 scope.go:117] "RemoveContainer" containerID="e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c" May 15 10:05:08.987017 env[1316]: time="2025-05-15T10:05:08.986982534Z" level=info msg="RemoveContainer for \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\"" May 15 10:05:08.989452 env[1316]: time="2025-05-15T10:05:08.989420194Z" 
level=info msg="RemoveContainer for \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\" returns successfully" May 15 10:05:08.989711 kubelet[2175]: I0515 10:05:08.989686 2175 scope.go:117] "RemoveContainer" containerID="78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98" May 15 10:05:08.990747 env[1316]: time="2025-05-15T10:05:08.990717724Z" level=info msg="RemoveContainer for \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\"" May 15 10:05:08.993174 env[1316]: time="2025-05-15T10:05:08.993143344Z" level=info msg="RemoveContainer for \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\" returns successfully" May 15 10:05:08.993364 kubelet[2175]: I0515 10:05:08.993342 2175 scope.go:117] "RemoveContainer" containerID="c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a" May 15 10:05:08.994354 env[1316]: time="2025-05-15T10:05:08.994327393Z" level=info msg="RemoveContainer for \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\"" May 15 10:05:08.996870 env[1316]: time="2025-05-15T10:05:08.996834174Z" level=info msg="RemoveContainer for \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\" returns successfully" May 15 10:05:08.997050 kubelet[2175]: I0515 10:05:08.997025 2175 scope.go:117] "RemoveContainer" containerID="fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49" May 15 10:05:08.997395 env[1316]: time="2025-05-15T10:05:08.997334378Z" level=error msg="ContainerStatus for \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\": not found" May 15 10:05:08.997547 kubelet[2175]: E0515 10:05:08.997520 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\": not found" containerID="fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49" May 15 10:05:08.997586 kubelet[2175]: I0515 10:05:08.997557 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49"} err="failed to get container status \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49\": not found" May 15 10:05:08.997586 kubelet[2175]: I0515 10:05:08.997581 2175 scope.go:117] "RemoveContainer" containerID="c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73" May 15 10:05:08.997794 env[1316]: time="2025-05-15T10:05:08.997748221Z" level=error msg="ContainerStatus for \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\": not found" May 15 10:05:08.997891 kubelet[2175]: E0515 10:05:08.997872 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\": not found" containerID="c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73" May 15 
10:05:08.997925 kubelet[2175]: I0515 10:05:08.997895 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73"} err="failed to get container status \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\": rpc error: code = NotFound desc = an error occurred when try to find container \"c12b5f34908c2c428da8d008b399f5956b4973f89253cea8f745603d7d86bb73\": not found" May 15 10:05:08.997925 kubelet[2175]: I0515 10:05:08.997908 2175 scope.go:117] "RemoveContainer" containerID="e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c" May 15 10:05:08.998132 env[1316]: time="2025-05-15T10:05:08.998080184Z" level=error msg="ContainerStatus for \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\": not found" May 15 10:05:08.998241 kubelet[2175]: E0515 10:05:08.998219 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\": not found" containerID="e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c" May 15 10:05:08.998282 kubelet[2175]: I0515 10:05:08.998248 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c"} err="failed to get container status \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e320afb2f878e525daa201e945547b468354b7ca212c468f51bac8390cd1304c\": not found" May 15 10:05:08.998282 kubelet[2175]: I0515 10:05:08.998266 2175 scope.go:117] "RemoveContainer" containerID="78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98" May 15 10:05:08.998444 env[1316]: time="2025-05-15T10:05:08.998401706Z" level=error msg="ContainerStatus for \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\": not found" May 15 10:05:08.998551 kubelet[2175]: E0515 10:05:08.998532 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\": not found" containerID="78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98" May 15 10:05:08.998587 kubelet[2175]: I0515 10:05:08.998557 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98"} err="failed to get container status \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\": rpc error: code = NotFound desc = an error occurred when try to find container \"78ce273856f4f871a797cb04c5f53f5d9fc2853b6c1a1d9fd9cf75c2b112da98\": not found" May 15 10:05:08.998587 kubelet[2175]: I0515 10:05:08.998572 2175 scope.go:117] "RemoveContainer" containerID="c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a" May 15 10:05:08.998813 env[1316]: time="2025-05-15T10:05:08.998771989Z" 
level=error msg="ContainerStatus for \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\": not found" May 15 10:05:08.998893 kubelet[2175]: E0515 10:05:08.998876 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\": not found" containerID="c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a" May 15 10:05:08.998931 kubelet[2175]: I0515 10:05:08.998899 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a"} err="failed to get container status \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c162f1c19b785b70c2150bab00e0d9e4b7c2eb38ab106ba92ff140faea29c77a\": not found" May 15 10:05:09.036505 kubelet[2175]: I0515 10:05:09.036459 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036505 kubelet[2175]: I0515 10:05:09.036494 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036505 kubelet[2175]: I0515 10:05:09.036509 2175 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c0a55a2-89a2-4518-a008-625c4c63b850-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036518 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zbshp\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-kube-api-access-zbshp\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036527 2175 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036536 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036544 2175 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c0a55a2-89a2-4518-a008-625c4c63b850-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036552 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036561 2175 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036572 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vxzkp\" (UniqueName: \"kubernetes.io/projected/0bc94d72-24a7-49e3-8072-97a476a47a0a-kube-api-access-vxzkp\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036689 kubelet[2175]: I0515 10:05:09.036583 2175 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036891 kubelet[2175]: I0515 10:05:09.036593 2175 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036891 kubelet[2175]: I0515 10:05:09.036602 2175 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036891 kubelet[2175]: I0515 10:05:09.036610 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036891 kubelet[2175]: I0515 10:05:09.036618 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bc94d72-24a7-49e3-8072-97a476a47a0a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.036891 kubelet[2175]: I0515 10:05:09.036626 2175 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0a55a2-89a2-4518-a008-625c4c63b850-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:05:09.653422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf37c0bd944d189846069541c5492d84992790b068cc9c7d4dd461fb4d23e49-rootfs.mount: Deactivated successfully. May 15 10:05:09.653577 systemd[1]: var-lib-kubelet-pods-0bc94d72\x2d24a7\x2d49e3\x2d8072\x2d97a476a47a0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxzkp.mount: Deactivated successfully. May 15 10:05:09.653673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb-rootfs.mount: Deactivated successfully. May 15 10:05:09.653755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7357dc33706e60127a47874fc221bc9cbc5f1944a5285a60931f068dfb40cbeb-shm.mount: Deactivated successfully. May 15 10:05:09.653840 systemd[1]: var-lib-kubelet-pods-3c0a55a2\x2d89a2\x2d4518\x2da008\x2d625c4c63b850-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbshp.mount: Deactivated successfully. May 15 10:05:09.653924 systemd[1]: var-lib-kubelet-pods-3c0a55a2\x2d89a2\x2d4518\x2da008\x2d625c4c63b850-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 10:05:09.654010 systemd[1]: var-lib-kubelet-pods-3c0a55a2\x2d89a2\x2d4518\x2da008\x2d625c4c63b850-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 10:05:09.783831 kubelet[2175]: I0515 10:05:09.783779 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc94d72-24a7-49e3-8072-97a476a47a0a" path="/var/lib/kubelet/pods/0bc94d72-24a7-49e3-8072-97a476a47a0a/volumes" May 15 10:05:09.784232 kubelet[2175]: I0515 10:05:09.784193 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" path="/var/lib/kubelet/pods/3c0a55a2-89a2-4518-a008-625c4c63b850/volumes" May 15 10:05:10.560224 sshd[3795]: pam_unix(sshd:session): session closed for user core May 15 10:05:10.562735 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:57796.service. May 15 10:05:10.565860 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:57784.service: Deactivated successfully. May 15 10:05:10.567284 systemd[1]: session-22.scope: Deactivated successfully. May 15 10:05:10.567660 systemd-logind[1306]: Session 22 logged out. Waiting for processes to exit. May 15 10:05:10.568508 systemd-logind[1306]: Removed session 22. May 15 10:05:10.597661 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 57796 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:10.599067 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:10.603017 systemd-logind[1306]: New session 23 of user core. May 15 10:05:10.603574 systemd[1]: Started session-23.scope. May 15 10:05:11.451752 sshd[3967]: pam_unix(sshd:session): session closed for user core May 15 10:05:11.454247 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:57806.service. May 15 10:05:11.466394 kubelet[2175]: I0515 10:05:11.466351 2175 topology_manager.go:215] "Topology Admit Handler" podUID="7b96d4dc-9440-4f64-8fad-516b3eed570f" podNamespace="kube-system" podName="cilium-k2fv5" May 15 10:05:11.468009 kubelet[2175]: E0515 10:05:11.467981 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="mount-bpf-fs" May 15 10:05:11.468149 kubelet[2175]: E0515 10:05:11.468136 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="cilium-agent" May 15 10:05:11.468215 kubelet[2175]: E0515 10:05:11.468196 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="mount-cgroup" May 15 10:05:11.468294 kubelet[2175]: E0515 10:05:11.468283 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="apply-sysctl-overwrites" May 15 10:05:11.468358 kubelet[2175]: E0515 10:05:11.468349 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0bc94d72-24a7-49e3-8072-97a476a47a0a" containerName="cilium-operator" May 15 10:05:11.468410 kubelet[2175]: E0515 10:05:11.468401 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="clean-cilium-state" May 15 10:05:11.468497 kubelet[2175]: I0515 10:05:11.468485 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0a55a2-89a2-4518-a008-625c4c63b850" containerName="cilium-agent" May 15 10:05:11.469495 kubelet[2175]: I0515 10:05:11.469475 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc94d72-24a7-49e3-8072-97a476a47a0a" containerName="cilium-operator" May 15 10:05:11.484863 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:57796.service: Deactivated successfully. 
May 15 10:05:11.491518 systemd[1]: session-23.scope: Deactivated successfully. May 15 10:05:11.493360 systemd-logind[1306]: Session 23 logged out. Waiting for processes to exit. May 15 10:05:11.497759 systemd-logind[1306]: Removed session 23. May 15 10:05:11.527263 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 57806 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:11.528587 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:11.536269 systemd[1]: Started session-24.scope. May 15 10:05:11.537253 systemd-logind[1306]: New session 24 of user core. May 15 10:05:11.548953 kubelet[2175]: I0515 10:05:11.548902 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-kernel\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.548953 kubelet[2175]: I0515 10:05:11.548950 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-etc-cni-netd\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549152 kubelet[2175]: I0515 10:05:11.548972 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-cgroup\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549152 kubelet[2175]: I0515 10:05:11.548990 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-ipsec-secrets\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549152 kubelet[2175]: I0515 10:05:11.549007 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-bpf-maps\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549152 kubelet[2175]: I0515 10:05:11.549023 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-clustermesh-secrets\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549152 kubelet[2175]: I0515 10:05:11.549038 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-config-path\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549069 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-net\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549090 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfnw\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-kube-api-access-5gfnw\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549107 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-run\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549124 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cni-path\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549138 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-hostproc\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549337 kubelet[2175]: I0515 10:05:11.549153 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-xtables-lock\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549516 kubelet[2175]: I0515 10:05:11.549171 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-lib-modules\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.549516 kubelet[2175]: I0515 10:05:11.549189 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-hubble-tls\") pod \"cilium-k2fv5\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " pod="kube-system/cilium-k2fv5" May 15 10:05:11.757090 sshd[3980]: pam_unix(sshd:session): session closed for user core May 15 10:05:11.760684 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:57812.service. May 15 10:05:11.761305 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:57806.service: Deactivated successfully. May 15 10:05:11.762900 systemd-logind[1306]: Session 24 logged out. Waiting for processes to exit. May 15 10:05:11.762916 systemd[1]: session-24.scope: Deactivated successfully. May 15 10:05:11.763947 systemd-logind[1306]: Removed session 24. 
May 15 10:05:11.795974 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 57812 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:05:11.796915 kubelet[2175]: E0515 10:05:11.796884 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:05:11.800196 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:05:11.804496 env[1316]: time="2025-05-15T10:05:11.801723385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k2fv5,Uid:7b96d4dc-9440-4f64-8fad-516b3eed570f,Namespace:kube-system,Attempt:0,}" May 15 10:05:11.810252 systemd[1]: Started session-25.scope. May 15 10:05:11.810682 systemd-logind[1306]: New session 25 of user core. May 15 10:05:11.832151 env[1316]: time="2025-05-15T10:05:11.825163228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:05:11.832151 env[1316]: time="2025-05-15T10:05:11.825406630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:05:11.832151 env[1316]: time="2025-05-15T10:05:11.825432230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:05:11.832151 env[1316]: time="2025-05-15T10:05:11.825645632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f pid=4011 runtime=io.containerd.runc.v2 May 15 10:05:11.836876 kubelet[2175]: E0515 10:05:11.836834 2175 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:05:11.900262 env[1316]: time="2025-05-15T10:05:11.900192190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k2fv5,Uid:7b96d4dc-9440-4f64-8fad-516b3eed570f,Namespace:kube-system,Attempt:0,} returns sandbox id \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\"" May 15 10:05:11.903244 kubelet[2175]: E0515 10:05:11.902484 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:05:11.906409 env[1316]: time="2025-05-15T10:05:11.906357633Z" level=info msg="CreateContainer within sandbox \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:05:11.918420 env[1316]: time="2025-05-15T10:05:11.918347517Z" level=info msg="CreateContainer within sandbox \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\"" May 15 10:05:11.919760 env[1316]: time="2025-05-15T10:05:11.918893680Z" level=info msg="StartContainer for \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\"" May 15 10:05:11.993922 env[1316]: time="2025-05-15T10:05:11.993550960Z" level=info msg="StartContainer for \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\" returns successfully" May 15 10:05:12.028439 env[1316]: 
time="2025-05-15T10:05:12.028317272Z" level=info msg="shim disconnected" id=49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7 May 15 10:05:12.028439 env[1316]: time="2025-05-15T10:05:12.028367473Z" level=warning msg="cleaning up after shim disconnected" id=49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7 namespace=k8s.io May 15 10:05:12.028439 env[1316]: time="2025-05-15T10:05:12.028377673Z" level=info msg="cleaning up dead shim" May 15 10:05:12.039060 env[1316]: time="2025-05-15T10:05:12.038948263Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4102 runtime=io.containerd.runc.v2\n" May 15 10:05:12.782495 kubelet[2175]: E0515 10:05:12.782457 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:05:12.981646 env[1316]: time="2025-05-15T10:05:12.981607105Z" level=info msg="StopPodSandbox for \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\"" May 15 10:05:12.982113 env[1316]: time="2025-05-15T10:05:12.982073388Z" level=info msg="Container to stop \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:05:12.984303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f-shm.mount: Deactivated successfully. May 15 10:05:13.007679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f-rootfs.mount: Deactivated successfully. May 15 10:05:13.013616 env[1316]: time="2025-05-15T10:05:13.013565992Z" level=info msg="shim disconnected" id=990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f May 15 10:05:13.014220 env[1316]: time="2025-05-15T10:05:13.014180236Z" level=warning msg="cleaning up after shim disconnected" id=990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f namespace=k8s.io May 15 10:05:13.014323 env[1316]: time="2025-05-15T10:05:13.014308757Z" level=info msg="cleaning up dead shim" May 15 10:05:13.023111 env[1316]: time="2025-05-15T10:05:13.023054892Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4134 runtime=io.containerd.runc.v2\n" May 15 10:05:13.023597 env[1316]: time="2025-05-15T10:05:13.023566335Z" level=info msg="TearDown network for sandbox \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\" successfully" May 15 10:05:13.023717 env[1316]: time="2025-05-15T10:05:13.023699256Z" level=info msg="StopPodSandbox for \"990ef1ed9a605597a7935ba59cb118fe8ce251a9737242e0857ddd7ddc9f7d6f\" returns successfully" May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059722 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-ipsec-secrets\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059766 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cni-path\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: 
\"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059785 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-xtables-lock\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059805 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-hubble-tls\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059825 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-clustermesh-secrets\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.059854 kubelet[2175]: I0515 10:05:13.059843 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-config-path\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059857 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-net\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059873 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-kernel\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059893 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-etc-cni-netd\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059908 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-bpf-maps\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059925 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-run\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060164 kubelet[2175]: I0515 10:05:13.059943 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gfnw\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-kube-api-access-5gfnw\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: 
\"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060635 kubelet[2175]: I0515 10:05:13.059959 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-hostproc\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060635 kubelet[2175]: I0515 10:05:13.059974 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-cgroup\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060635 kubelet[2175]: I0515 10:05:13.059988 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-lib-modules\") pod \"7b96d4dc-9440-4f64-8fad-516b3eed570f\" (UID: \"7b96d4dc-9440-4f64-8fad-516b3eed570f\") " May 15 10:05:13.060635 kubelet[2175]: I0515 10:05:13.060034 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060635 kubelet[2175]: I0515 10:05:13.060071 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cni-path" (OuterVolumeSpecName: "cni-path") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060767 kubelet[2175]: I0515 10:05:13.060087 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060767 kubelet[2175]: I0515 10:05:13.060374 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060767 kubelet[2175]: I0515 10:05:13.060565 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060767 kubelet[2175]: I0515 10:05:13.060578 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.060767 kubelet[2175]: I0515 10:05:13.060605 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.061037 kubelet[2175]: I0515 10:05:13.060950 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-hostproc" (OuterVolumeSpecName: "hostproc") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.061037 kubelet[2175]: I0515 10:05:13.060989 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.061037 kubelet[2175]: I0515 10:05:13.061009 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:05:13.062662 kubelet[2175]: I0515 10:05:13.062613 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:05:13.064705 systemd[1]: var-lib-kubelet-pods-7b96d4dc\x2d9440\x2d4f64\x2d8fad\x2d516b3eed570f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 10:05:13.064852 systemd[1]: var-lib-kubelet-pods-7b96d4dc\x2d9440\x2d4f64\x2d8fad\x2d516b3eed570f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:05:13.067455 systemd[1]: var-lib-kubelet-pods-7b96d4dc\x2d9440\x2d4f64\x2d8fad\x2d516b3eed570f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5gfnw.mount: Deactivated successfully. May 15 10:05:13.067610 systemd[1]: var-lib-kubelet-pods-7b96d4dc\x2d9440\x2d4f64\x2d8fad\x2d516b3eed570f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 10:05:13.069150 kubelet[2175]: I0515 10:05:13.069086 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 10:05:13.069405 kubelet[2175]: I0515 10:05:13.069375 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-kube-api-access-5gfnw" (OuterVolumeSpecName: "kube-api-access-5gfnw") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "kube-api-access-5gfnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:05:13.069479 kubelet[2175]: I0515 10:05:13.069464 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 10:05:13.069986 kubelet[2175]: I0515 10:05:13.069956 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7b96d4dc-9440-4f64-8fad-516b3eed570f" (UID: "7b96d4dc-9440-4f64-8fad-516b3eed570f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 10:05:13.160509 kubelet[2175]: I0515 10:05:13.160464 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160509 kubelet[2175]: I0515 10:05:13.160498 2175 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160509 kubelet[2175]: I0515 10:05:13.160509 2175 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160509 kubelet[2175]: I0515 10:05:13.160520 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160528 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5gfnw\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-kube-api-access-5gfnw\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160538 2175 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160546 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160554 2175 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160561 2175 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160570 2175 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160578 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160743 kubelet[2175]: I0515 10:05:13.160587 2175 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7b96d4dc-9440-4f64-8fad-516b3eed570f-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160920 kubelet[2175]: I0515 10:05:13.160595 2175 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7b96d4dc-9440-4f64-8fad-516b3eed570f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160920 kubelet[2175]: I0515 10:05:13.160603 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b96d4dc-9440-4f64-8fad-516b3eed570f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.160920 kubelet[2175]: I0515 10:05:13.160610 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7b96d4dc-9440-4f64-8fad-516b3eed570f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 10:05:13.690849 kubelet[2175]: I0515 10:05:13.690780 2175 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:05:13Z","lastTransitionTime":"2025-05-15T10:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 10:05:13.985923 kubelet[2175]: I0515 10:05:13.985619 2175 scope.go:117] "RemoveContainer" containerID="49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7"
May 15 10:05:13.987982 env[1316]: time="2025-05-15T10:05:13.987934007Z" level=info msg="RemoveContainer for \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\""
May 15 10:05:13.993715 env[1316]: time="2025-05-15T10:05:13.991534030Z" level=info msg="RemoveContainer for \"49fc67346aed139f6e8c8da60ea693034a72be91c6876fff9d640796fe1976e7\" returns successfully"
May 15 10:05:14.027921 kubelet[2175]: I0515 10:05:14.027861 2175 topology_manager.go:215] "Topology Admit Handler" podUID="c7fbc3f5-09e6-4e16-863d-4feb475cdbaa" podNamespace="kube-system" podName="cilium-fhln6"
May 15 10:05:14.028075 kubelet[2175]: E0515 10:05:14.027936 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7b96d4dc-9440-4f64-8fad-516b3eed570f" containerName="mount-cgroup"
May 15 10:05:14.028075 kubelet[2175]: I0515 10:05:14.027968 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b96d4dc-9440-4f64-8fad-516b3eed570f" containerName="mount-cgroup"
May 15 10:05:14.066576 kubelet[2175]: I0515 10:05:14.066506 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-etc-cni-netd\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066576 kubelet[2175]: I0515 10:05:14.066567 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-clustermesh-secrets\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066611 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-hubble-tls\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066651 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-xtables-lock\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066702 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-cilium-cgroup\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066737 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-cni-path\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066757 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-cilium-run\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066780 kubelet[2175]: I0515 10:05:14.066773 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-cilium-config-path\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066791 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-host-proc-sys-net\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066837 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-bpf-maps\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066866 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-lib-modules\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066884 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcnzq\" (UniqueName: \"kubernetes.io/projected/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-kube-api-access-wcnzq\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066901 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-cilium-ipsec-secrets\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.066920 kubelet[2175]: I0515 10:05:14.066917 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-hostproc\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.067058 kubelet[2175]: I0515 10:05:14.066933 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7fbc3f5-09e6-4e16-863d-4feb475cdbaa-host-proc-sys-kernel\") pod \"cilium-fhln6\" (UID: \"c7fbc3f5-09e6-4e16-863d-4feb475cdbaa\") " pod="kube-system/cilium-fhln6"
May 15 10:05:14.331716 kubelet[2175]: E0515 10:05:14.331679 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:14.332753 env[1316]: time="2025-05-15T10:05:14.332693273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhln6,Uid:c7fbc3f5-09e6-4e16-863d-4feb475cdbaa,Namespace:kube-system,Attempt:0,}"
May 15 10:05:14.363754 env[1316]: time="2025-05-15T10:05:14.363667538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:05:14.363754 env[1316]: time="2025-05-15T10:05:14.363709698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:05:14.363754 env[1316]: time="2025-05-15T10:05:14.363720698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:05:14.363968 env[1316]: time="2025-05-15T10:05:14.363861459Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86 pid=4163 runtime=io.containerd.runc.v2
May 15 10:05:14.405029 env[1316]: time="2025-05-15T10:05:14.404958305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhln6,Uid:c7fbc3f5-09e6-4e16-863d-4feb475cdbaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\""
May 15 10:05:14.406202 kubelet[2175]: E0515 10:05:14.406179 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:14.409474 env[1316]: time="2025-05-15T10:05:14.408610367Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 10:05:14.422040 env[1316]: time="2025-05-15T10:05:14.421964847Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d34be6c4d5cf2df2f2e6a4060ef3a8073039f88891a8d122f25cf6b3c843cd53\""
May 15 10:05:14.423406 env[1316]: time="2025-05-15T10:05:14.422643851Z" level=info msg="StartContainer for \"d34be6c4d5cf2df2f2e6a4060ef3a8073039f88891a8d122f25cf6b3c843cd53\""
May 15 10:05:14.481860 env[1316]: time="2025-05-15T10:05:14.481795244Z" level=info msg="StartContainer for \"d34be6c4d5cf2df2f2e6a4060ef3a8073039f88891a8d122f25cf6b3c843cd53\" returns successfully"
May 15 10:05:14.518011 env[1316]: time="2025-05-15T10:05:14.517950381Z" level=info msg="shim disconnected" id=d34be6c4d5cf2df2f2e6a4060ef3a8073039f88891a8d122f25cf6b3c843cd53
May 15 10:05:14.518011 env[1316]: time="2025-05-15T10:05:14.518001421Z" level=warning msg="cleaning up after shim disconnected" id=d34be6c4d5cf2df2f2e6a4060ef3a8073039f88891a8d122f25cf6b3c843cd53 namespace=k8s.io
May 15 10:05:14.518011 env[1316]: time="2025-05-15T10:05:14.518010541Z" level=info msg="cleaning up dead shim"
May 15 10:05:14.525712 env[1316]: time="2025-05-15T10:05:14.525656467Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4246 runtime=io.containerd.runc.v2\n"
May 15 10:05:14.992745 kubelet[2175]: E0515 10:05:14.992707 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:14.996477 env[1316]: time="2025-05-15T10:05:14.996426242Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 10:05:15.048117 env[1316]: time="2025-05-15T10:05:15.048041937Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"332f396695f22425bf96850e266f358e8db381ecb2a1622748230564352f3da3\""
May 15 10:05:15.050046 env[1316]: time="2025-05-15T10:05:15.048796581Z" level=info msg="StartContainer for \"332f396695f22425bf96850e266f358e8db381ecb2a1622748230564352f3da3\""
May 15 10:05:15.121269 env[1316]: time="2025-05-15T10:05:15.121184792Z" level=info msg="StartContainer for \"332f396695f22425bf96850e266f358e8db381ecb2a1622748230564352f3da3\" returns successfully"
May 15 10:05:15.147919 env[1316]: time="2025-05-15T10:05:15.147870703Z" level=info msg="shim disconnected" id=332f396695f22425bf96850e266f358e8db381ecb2a1622748230564352f3da3
May 15 10:05:15.147919 env[1316]: time="2025-05-15T10:05:15.147917423Z" level=warning msg="cleaning up after shim disconnected" id=332f396695f22425bf96850e266f358e8db381ecb2a1622748230564352f3da3 namespace=k8s.io
May 15 10:05:15.147919 env[1316]: time="2025-05-15T10:05:15.147928463Z" level=info msg="cleaning up dead shim"
May 15 10:05:15.155901 env[1316]: time="2025-05-15T10:05:15.155821428Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4308 runtime=io.containerd.runc.v2\n"
May 15 10:05:15.783013 kubelet[2175]: E0515 10:05:15.782971 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:15.784175 kubelet[2175]: I0515 10:05:15.784098 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b96d4dc-9440-4f64-8fad-516b3eed570f" path="/var/lib/kubelet/pods/7b96d4dc-9440-4f64-8fad-516b3eed570f/volumes"
May 15 10:05:15.996675 kubelet[2175]: E0515 10:05:15.995966 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:15.999180 env[1316]: time="2025-05-15T10:05:15.999123934Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 10:05:16.019850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662123319.mount: Deactivated successfully.
May 15 10:05:16.021644 env[1316]: time="2025-05-15T10:05:16.021573695Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba\""
May 15 10:05:16.022432 env[1316]: time="2025-05-15T10:05:16.022401940Z" level=info msg="StartContainer for \"c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba\""
May 15 10:05:16.089723 env[1316]: time="2025-05-15T10:05:16.089613621Z" level=info msg="StartContainer for \"c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba\" returns successfully"
May 15 10:05:16.118470 env[1316]: time="2025-05-15T10:05:16.118415856Z" level=info msg="shim disconnected" id=c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba
May 15 10:05:16.118470 env[1316]: time="2025-05-15T10:05:16.118468057Z" level=warning msg="cleaning up after shim disconnected" id=c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba namespace=k8s.io
May 15 10:05:16.118470 env[1316]: time="2025-05-15T10:05:16.118479657Z" level=info msg="cleaning up dead shim"
May 15 10:05:16.125441 env[1316]: time="2025-05-15T10:05:16.125388174Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4365 runtime=io.containerd.runc.v2\n"
May 15 10:05:16.172822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6d48f4d474a927f44f46b9d2946f933e8377aa5f19f069f13df2f7de6d325ba-rootfs.mount: Deactivated successfully.
May 15 10:05:16.838526 kubelet[2175]: E0515 10:05:16.838471 2175 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 10:05:16.999740 kubelet[2175]: E0515 10:05:16.999694 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:17.002487 env[1316]: time="2025-05-15T10:05:17.002444931Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 10:05:17.018234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282552851.mount: Deactivated successfully.
May 15 10:05:17.023536 env[1316]: time="2025-05-15T10:05:17.023469878Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca\""
May 15 10:05:17.024012 env[1316]: time="2025-05-15T10:05:17.023986081Z" level=info msg="StartContainer for \"221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca\""
May 15 10:05:17.077798 env[1316]: time="2025-05-15T10:05:17.077750155Z" level=info msg="StartContainer for \"221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca\" returns successfully"
May 15 10:05:17.099790 env[1316]: time="2025-05-15T10:05:17.099674466Z" level=info msg="shim disconnected" id=221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca
May 15 10:05:17.099790 env[1316]: time="2025-05-15T10:05:17.099723547Z" level=warning msg="cleaning up after shim disconnected" id=221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca namespace=k8s.io
May 15 10:05:17.099790 env[1316]: time="2025-05-15T10:05:17.099733947Z" level=info msg="cleaning up dead shim"
May 15 10:05:17.106731 env[1316]: time="2025-05-15T10:05:17.106632342Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:05:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4420 runtime=io.containerd.runc.v2\n"
May 15 10:05:17.172837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-221c4df069128845f08d733d1cddcd54f261b4ac692411b99bcbbc43dbd27cca-rootfs.mount: Deactivated successfully.
May 15 10:05:18.004647 kubelet[2175]: E0515 10:05:18.004583 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:18.008270 env[1316]: time="2025-05-15T10:05:18.008144891Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 10:05:18.024532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588987144.mount: Deactivated successfully.
May 15 10:05:18.027383 env[1316]: time="2025-05-15T10:05:18.027256063Z" level=info msg="CreateContainer within sandbox \"e831462027ed116a92e6c1f74a43d77acd10930456d377e121eb745c73ceba86\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"00c616b5bb67c35c059c0cf367e933b2b564812669572b794760f895dc25602f\""
May 15 10:05:18.029061 env[1316]: time="2025-05-15T10:05:18.027748425Z" level=info msg="StartContainer for \"00c616b5bb67c35c059c0cf367e933b2b564812669572b794760f895dc25602f\""
May 15 10:05:18.102283 env[1316]: time="2025-05-15T10:05:18.102172304Z" level=info msg="StartContainer for \"00c616b5bb67c35c059c0cf367e933b2b564812669572b794760f895dc25602f\" returns successfully"
May 15 10:05:18.387231 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 15 10:05:19.008721 kubelet[2175]: E0515 10:05:19.008664 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:19.025721 kubelet[2175]: I0515 10:05:19.025661 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fhln6" podStartSLOduration=5.025633783 podStartE2EDuration="5.025633783s" podCreationTimestamp="2025-05-15 10:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:05:19.024985821 +0000 UTC m=+87.344927488" watchObservedRunningTime="2025-05-15 10:05:19.025633783 +0000 UTC m=+87.345575410"
May 15 10:05:20.179723 systemd[1]: run-containerd-runc-k8s.io-00c616b5bb67c35c059c0cf367e933b2b564812669572b794760f895dc25602f-runc.u0uBZt.mount: Deactivated successfully.
May 15 10:05:20.333740 kubelet[2175]: E0515 10:05:20.333701 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:21.281249 systemd-networkd[1101]: lxc_health: Link UP
May 15 10:05:21.302269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 10:05:21.302215 systemd-networkd[1101]: lxc_health: Gained carrier
May 15 10:05:22.334816 kubelet[2175]: E0515 10:05:22.334777 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:23.015188 kubelet[2175]: E0515 10:05:23.015143 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:23.294330 systemd-networkd[1101]: lxc_health: Gained IPv6LL
May 15 10:05:24.017068 kubelet[2175]: E0515 10:05:24.017025 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:24.782797 kubelet[2175]: E0515 10:05:24.782715 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:05:26.579959 systemd[1]: run-containerd-runc-k8s.io-00c616b5bb67c35c059c0cf367e933b2b564812669572b794760f895dc25602f-runc.IRaeTb.mount: Deactivated successfully.
May 15 10:05:26.638905 sshd[4000]: pam_unix(sshd:session): session closed for user core
May 15 10:05:26.642339 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:57812.service: Deactivated successfully.
May 15 10:05:26.643369 systemd[1]: session-25.scope: Deactivated successfully.
May 15 10:05:26.643383 systemd-logind[1306]: Session 25 logged out. Waiting for processes to exit.
May 15 10:05:26.644406 systemd-logind[1306]: Removed session 25.