May 8 00:44:19.719945 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:44:19.719965 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed May 7 23:24:31 -00 2025
May 8 00:44:19.719973 kernel: efi: EFI v2.70 by EDK II
May 8 00:44:19.719979 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 8 00:44:19.719983 kernel: random: crng init done
May 8 00:44:19.719989 kernel: ACPI: Early table checksum verification disabled
May 8 00:44:19.719995 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 8 00:44:19.720001 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:44:19.720007 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720012 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720018 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720023 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720028 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720033 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720041 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720047 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720053 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:44:19.720058 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:44:19.720064 kernel: NUMA: Failed to initialise from firmware
May 8 00:44:19.720070 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:44:19.720075 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 8 00:44:19.720081 kernel: Zone ranges:
May 8 00:44:19.720087 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:44:19.720093 kernel: DMA32 empty
May 8 00:44:19.720099 kernel: Normal empty
May 8 00:44:19.720104 kernel: Movable zone start for each node
May 8 00:44:19.720110 kernel: Early memory node ranges
May 8 00:44:19.720115 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 8 00:44:19.720121 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 8 00:44:19.720127 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 8 00:44:19.720132 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 8 00:44:19.720138 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 8 00:44:19.720144 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 8 00:44:19.720149 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 8 00:44:19.720155 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:44:19.720162 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:44:19.720167 kernel: psci: probing for conduit method from ACPI.
May 8 00:44:19.720173 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:44:19.720179 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:44:19.720184 kernel: psci: Trusted OS migration not required
May 8 00:44:19.720192 kernel: psci: SMC Calling Convention v1.1
May 8 00:44:19.720198 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:44:19.720206 kernel: ACPI: SRAT not present
May 8 00:44:19.720212 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 8 00:44:19.720218 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 8 00:44:19.720224 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:44:19.720230 kernel: Detected PIPT I-cache on CPU0
May 8 00:44:19.720237 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:44:19.720243 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:44:19.720249 kernel: CPU features: detected: Spectre-v4
May 8 00:44:19.720255 kernel: CPU features: detected: Spectre-BHB
May 8 00:44:19.720262 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:44:19.720268 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:44:19.720274 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:44:19.720280 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:44:19.720287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:44:19.720292 kernel: Policy zone: DMA
May 8 00:44:19.720300 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:44:19.720307 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:44:19.720313 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:44:19.720319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:44:19.720325 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:44:19.720332 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
May 8 00:44:19.720339 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:44:19.720345 kernel: trace event string verifier disabled
May 8 00:44:19.720351 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:44:19.720357 kernel: rcu: RCU event tracing is enabled.
May 8 00:44:19.720364 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:44:19.720370 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:44:19.720376 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:44:19.720382 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:44:19.720388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:44:19.720394 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:44:19.720401 kernel: GICv3: 256 SPIs implemented
May 8 00:44:19.720408 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:44:19.720414 kernel: GICv3: Distributor has no Range Selector support
May 8 00:44:19.720420 kernel: Root IRQ handler: gic_handle_irq
May 8 00:44:19.720426 kernel: GICv3: 16 PPIs implemented
May 8 00:44:19.720432 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:44:19.720438 kernel: ACPI: SRAT not present
May 8 00:44:19.720443 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:44:19.720450 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:44:19.720456 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:44:19.720462 kernel: GICv3: using LPI property table @0x00000000400d0000
May 8 00:44:19.720468 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 8 00:44:19.720475 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:44:19.720481 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:44:19.720488 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:44:19.720494 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:44:19.720500 kernel: arm-pv: using stolen time PV
May 8 00:44:19.720506 kernel: Console: colour dummy device 80x25
May 8 00:44:19.720513 kernel: ACPI: Core revision 20210730
May 8 00:44:19.720519 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:44:19.720525 kernel: pid_max: default: 32768 minimum: 301
May 8 00:44:19.720531 kernel: LSM: Security Framework initializing
May 8 00:44:19.720539 kernel: SELinux: Initializing.
May 8 00:44:19.720545 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:44:19.720551 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:44:19.720558 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:44:19.720564 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:44:19.720570 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:44:19.720576 kernel: Remapping and enabling EFI services.
May 8 00:44:19.720582 kernel: smp: Bringing up secondary CPUs ...
May 8 00:44:19.720588 kernel: Detected PIPT I-cache on CPU1
May 8 00:44:19.720596 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:44:19.720602 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 8 00:44:19.720609 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:44:19.720615 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:44:19.720621 kernel: Detected PIPT I-cache on CPU2
May 8 00:44:19.720628 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:44:19.720634 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 8 00:44:19.720640 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:44:19.720659 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:44:19.720666 kernel: Detected PIPT I-cache on CPU3
May 8 00:44:19.720674 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:44:19.720680 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 8 00:44:19.720686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:44:19.720693 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:44:19.720704 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:44:19.720711 kernel: SMP: Total of 4 processors activated.
May 8 00:44:19.720718 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:44:19.720725 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:44:19.720731 kernel: CPU features: detected: Common not Private translations
May 8 00:44:19.720738 kernel: CPU features: detected: CRC32 instructions
May 8 00:44:19.720744 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:44:19.720751 kernel: CPU features: detected: LSE atomic instructions
May 8 00:44:19.720758 kernel: CPU features: detected: Privileged Access Never
May 8 00:44:19.720765 kernel: CPU features: detected: RAS Extension Support
May 8 00:44:19.720771 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:44:19.720778 kernel: CPU: All CPU(s) started at EL1
May 8 00:44:19.720790 kernel: alternatives: patching kernel code
May 8 00:44:19.720798 kernel: devtmpfs: initialized
May 8 00:44:19.720805 kernel: KASLR enabled
May 8 00:44:19.720811 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:44:19.720818 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:44:19.720824 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:44:19.720831 kernel: SMBIOS 3.0.0 present.
May 8 00:44:19.720838 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 8 00:44:19.720844 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:44:19.720851 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:44:19.720859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:44:19.720865 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:44:19.720872 kernel: audit: initializing netlink subsys (disabled)
May 8 00:44:19.720879 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 8 00:44:19.720885 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:44:19.720892 kernel: cpuidle: using governor menu
May 8 00:44:19.720898 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:44:19.720905 kernel: ASID allocator initialised with 32768 entries
May 8 00:44:19.720911 kernel: ACPI: bus type PCI registered
May 8 00:44:19.720919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:44:19.720925 kernel: Serial: AMBA PL011 UART driver
May 8 00:44:19.720932 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:44:19.720938 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:44:19.720945 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:44:19.720951 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:44:19.720958 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:44:19.720964 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:44:19.720971 kernel: ACPI: Added _OSI(Module Device)
May 8 00:44:19.720979 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:44:19.720985 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:44:19.720992 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:44:19.720998 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 8 00:44:19.721005 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 8 00:44:19.721011 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 8 00:44:19.721017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:44:19.721024 kernel: ACPI: Interpreter enabled
May 8 00:44:19.721030 kernel: ACPI: Using GIC for interrupt routing
May 8 00:44:19.721038 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:44:19.721045 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:44:19.721051 kernel: printk: console [ttyAMA0] enabled
May 8 00:44:19.721058 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:44:19.721193 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:44:19.721258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:44:19.721317 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:44:19.721378 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:44:19.721435 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:44:19.721443 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:44:19.721450 kernel: PCI host bridge to bus 0000:00
May 8 00:44:19.721521 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:44:19.721576 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:44:19.721712 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:44:19.721794 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:44:19.721903 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:44:19.722024 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:44:19.722090 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:44:19.722151 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:44:19.722232 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:44:19.722320 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:44:19.722390 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:44:19.722450 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:44:19.722504 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:44:19.722559 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:44:19.722617 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:44:19.722626 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:44:19.722633 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:44:19.722644 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:44:19.722701 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:44:19.722713 kernel: iommu: Default domain type: Translated
May 8 00:44:19.722727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:44:19.722737 kernel: vgaarb: loaded
May 8 00:44:19.722745 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:44:19.722754 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:44:19.722763 kernel: PTP clock support registered
May 8 00:44:19.722773 kernel: Registered efivars operations
May 8 00:44:19.722791 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:44:19.722804 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:44:19.722816 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:44:19.722824 kernel: pnp: PnP ACPI init
May 8 00:44:19.722906 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:44:19.722917 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:44:19.722926 kernel: NET: Registered PF_INET protocol family
May 8 00:44:19.722934 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:44:19.722940 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:44:19.722950 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:44:19.722958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:44:19.722966 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 8 00:44:19.722973 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:44:19.722980 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:44:19.722987 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:44:19.722993 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:44:19.723000 kernel: PCI: CLS 0 bytes, default 64
May 8 00:44:19.723007 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:44:19.723015 kernel: kvm [1]: HYP mode not available
May 8 00:44:19.723022 kernel: Initialise system trusted keyrings
May 8 00:44:19.723028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:44:19.723035 kernel: Key type asymmetric registered
May 8 00:44:19.723041 kernel: Asymmetric key parser 'x509' registered
May 8 00:44:19.723048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 8 00:44:19.723054 kernel: io scheduler mq-deadline registered
May 8 00:44:19.723061 kernel: io scheduler kyber registered
May 8 00:44:19.723068 kernel: io scheduler bfq registered
May 8 00:44:19.723076 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:44:19.723083 kernel: ACPI: button: Power Button [PWRB]
May 8 00:44:19.723090 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:44:19.723149 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:44:19.723158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:44:19.723165 kernel: thunder_xcv, ver 1.0
May 8 00:44:19.723172 kernel: thunder_bgx, ver 1.0
May 8 00:44:19.723178 kernel: nicpf, ver 1.0
May 8 00:44:19.723184 kernel: nicvf, ver 1.0
May 8 00:44:19.723257 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:44:19.723313 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:44:19 UTC (1746665059)
May 8 00:44:19.723322 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:44:19.723329 kernel: NET: Registered PF_INET6 protocol family
May 8 00:44:19.723335 kernel: Segment Routing with IPv6
May 8 00:44:19.723342 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:44:19.723348 kernel: NET: Registered PF_PACKET protocol family
May 8 00:44:19.723355 kernel: Key type dns_resolver registered
May 8 00:44:19.723364 kernel: registered taskstats version 1
May 8 00:44:19.723371 kernel: Loading compiled-in X.509 certificates
May 8 00:44:19.723378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: 47302b466ab2df930dd804d2ee9c8ab44de4e2dc'
May 8 00:44:19.723385 kernel: Key type .fscrypt registered
May 8 00:44:19.723392 kernel: Key type fscrypt-provisioning registered
May 8 00:44:19.723399 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:44:19.723406 kernel: ima: Allocated hash algorithm: sha1
May 8 00:44:19.723412 kernel: ima: No architecture policies found
May 8 00:44:19.723419 kernel: clk: Disabling unused clocks
May 8 00:44:19.723427 kernel: Freeing unused kernel memory: 36416K
May 8 00:44:19.723435 kernel: Run /init as init process
May 8 00:44:19.723442 kernel: with arguments:
May 8 00:44:19.723450 kernel: /init
May 8 00:44:19.723458 kernel: with environment:
May 8 00:44:19.723465 kernel: HOME=/
May 8 00:44:19.723473 kernel: TERM=linux
May 8 00:44:19.723482 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:44:19.723493 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:44:19.723511 systemd[1]: Detected virtualization kvm.
May 8 00:44:19.723520 systemd[1]: Detected architecture arm64.
May 8 00:44:19.723527 systemd[1]: Running in initrd.
May 8 00:44:19.723534 systemd[1]: No hostname configured, using default hostname.
May 8 00:44:19.723541 systemd[1]: Hostname set to <localhost>.
May 8 00:44:19.723548 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:44:19.723555 systemd[1]: Queued start job for default target initrd.target.
May 8 00:44:19.723564 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:44:19.723571 systemd[1]: Reached target cryptsetup.target.
May 8 00:44:19.723577 systemd[1]: Reached target paths.target.
May 8 00:44:19.723584 systemd[1]: Reached target slices.target.
May 8 00:44:19.723594 systemd[1]: Reached target swap.target.
May 8 00:44:19.723603 systemd[1]: Reached target timers.target.
May 8 00:44:19.723615 systemd[1]: Listening on iscsid.socket.
May 8 00:44:19.723627 systemd[1]: Listening on iscsiuio.socket.
May 8 00:44:19.723643 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:44:19.723666 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:44:19.723678 systemd[1]: Listening on systemd-journald.socket.
May 8 00:44:19.723687 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:44:19.723694 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:44:19.723704 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:44:19.723712 systemd[1]: Reached target sockets.target.
May 8 00:44:19.723721 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:44:19.723731 systemd[1]: Finished network-cleanup.service.
May 8 00:44:19.723739 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:44:19.723748 systemd[1]: Starting systemd-journald.service...
May 8 00:44:19.723756 systemd[1]: Starting systemd-modules-load.service...
May 8 00:44:19.723765 systemd[1]: Starting systemd-resolved.service...
May 8 00:44:19.723774 systemd[1]: Starting systemd-vconsole-setup.service...
May 8 00:44:19.723781 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:44:19.723797 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:44:19.723805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 8 00:44:19.723818 systemd[1]: Finished systemd-vconsole-setup.service.
May 8 00:44:19.723832 kernel: audit: type=1130 audit(1746665059.719:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.723841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 8 00:44:19.723856 kernel: audit: type=1130 audit(1746665059.723:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.723872 systemd-journald[290]: Journal started
May 8 00:44:19.723927 systemd-journald[290]: Runtime Journal (/run/log/journal/819d6b8a66b34b98aba77e52f59ec651) is 6.0M, max 48.7M, 42.6M free.
May 8 00:44:19.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.715248 systemd-modules-load[291]: Inserted module 'overlay'
May 8 00:44:19.726478 systemd[1]: Started systemd-journald.service.
May 8 00:44:19.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.730345 kernel: audit: type=1130 audit(1746665059.726:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.729837 systemd[1]: Starting dracut-cmdline-ask.service...
May 8 00:44:19.739016 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:44:19.739831 systemd-resolved[292]: Positive Trust Anchors:
May 8 00:44:19.739845 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:44:19.739872 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:44:19.744429 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 8 00:44:19.747745 kernel: Bridge firewalling registered
May 8 00:44:19.745228 systemd[1]: Started systemd-resolved.service.
May 8 00:44:19.750017 kernel: audit: type=1130 audit(1746665059.747:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.745867 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 8 00:44:19.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.749742 systemd[1]: Finished dracut-cmdline-ask.service.
May 8 00:44:19.752878 systemd[1]: Reached target nss-lookup.target.
May 8 00:44:19.754429 kernel: audit: type=1130 audit(1746665059.749:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.754745 systemd[1]: Starting dracut-cmdline.service...
May 8 00:44:19.757672 kernel: SCSI subsystem initialized
May 8 00:44:19.764214 dracut-cmdline[308]: dracut-dracut-053
May 8 00:44:19.765557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:44:19.765576 kernel: device-mapper: uevent: version 1.0.3
May 8 00:44:19.765585 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 8 00:44:19.766976 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:44:19.769125 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 8 00:44:19.770941 systemd[1]: Finished systemd-modules-load.service.
May 8 00:44:19.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.772905 systemd[1]: Starting systemd-sysctl.service...
May 8 00:44:19.774868 kernel: audit: type=1130 audit(1746665059.770:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.779858 systemd[1]: Finished systemd-sysctl.service.
May 8 00:44:19.782695 kernel: audit: type=1130 audit(1746665059.779:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.830670 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:44:19.844668 kernel: iscsi: registered transport (tcp)
May 8 00:44:19.861696 kernel: iscsi: registered transport (qla4xxx)
May 8 00:44:19.861747 kernel: QLogic iSCSI HBA Driver
May 8 00:44:19.897182 systemd[1]: Finished dracut-cmdline.service.
May 8 00:44:19.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.898644 systemd[1]: Starting dracut-pre-udev.service...
May 8 00:44:19.900901 kernel: audit: type=1130 audit(1746665059.897:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:19.941670 kernel: raid6: neonx8 gen() 13814 MB/s
May 8 00:44:19.958669 kernel: raid6: neonx8 xor() 10836 MB/s
May 8 00:44:19.975663 kernel: raid6: neonx4 gen() 13565 MB/s
May 8 00:44:19.992658 kernel: raid6: neonx4 xor() 11297 MB/s
May 8 00:44:20.009659 kernel: raid6: neonx2 gen() 12965 MB/s
May 8 00:44:20.026670 kernel: raid6: neonx2 xor() 10637 MB/s
May 8 00:44:20.043666 kernel: raid6: neonx1 gen() 10469 MB/s
May 8 00:44:20.060661 kernel: raid6: neonx1 xor() 8783 MB/s
May 8 00:44:20.077661 kernel: raid6: int64x8 gen() 6272 MB/s
May 8 00:44:20.094660 kernel: raid6: int64x8 xor() 3542 MB/s
May 8 00:44:20.111667 kernel: raid6: int64x4 gen() 7223 MB/s
May 8 00:44:20.128661 kernel: raid6: int64x4 xor() 3850 MB/s
May 8 00:44:20.145660 kernel: raid6: int64x2 gen() 6153 MB/s
May 8 00:44:20.162662 kernel: raid6: int64x2 xor() 3318 MB/s
May 8 00:44:20.179659 kernel: raid6: int64x1 gen() 5044 MB/s
May 8 00:44:20.196881 kernel: raid6: int64x1 xor() 2644 MB/s
May 8 00:44:20.196894 kernel: raid6: using algorithm neonx8 gen() 13814 MB/s
May 8 00:44:20.196903 kernel: raid6: .... xor() 10836 MB/s, rmw enabled
May 8 00:44:20.196911 kernel: raid6: using neon recovery algorithm
May 8 00:44:20.207662 kernel: xor: measuring software checksum speed
May 8 00:44:20.207679 kernel: 8regs : 17227 MB/sec
May 8 00:44:20.209075 kernel: 32regs : 19613 MB/sec
May 8 00:44:20.209086 kernel: arm64_neon : 27644 MB/sec
May 8 00:44:20.209095 kernel: xor: using function: arm64_neon (27644 MB/sec)
May 8 00:44:20.265670 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 8 00:44:20.275818 systemd[1]: Finished dracut-pre-udev.service.
May 8 00:44:20.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:20.277335 systemd[1]: Starting systemd-udevd.service...
May 8 00:44:20.279742 kernel: audit: type=1130 audit(1746665060.275:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:20.276000 audit: BPF prog-id=7 op=LOAD
May 8 00:44:20.276000 audit: BPF prog-id=8 op=LOAD
May 8 00:44:20.290182 systemd-udevd[493]: Using default interface naming scheme 'v252'.
May 8 00:44:20.294367 systemd[1]: Started systemd-udevd.service.
May 8 00:44:20.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:20.295837 systemd[1]: Starting dracut-pre-trigger.service...
May 8 00:44:20.307798 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
May 8 00:44:20.337062 systemd[1]: Finished dracut-pre-trigger.service.
May 8 00:44:20.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:20.338481 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:44:20.373675 systemd[1]: Finished systemd-udev-trigger.service.
May 8 00:44:20.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:20.407677 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:44:20.410291 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:44:20.410306 kernel: GPT:9289727 != 19775487
May 8 00:44:20.410314 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:44:20.410323 kernel: GPT:9289727 != 19775487
May 8 00:44:20.410331 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:44:20.410340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:20.425998 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 8 00:44:20.427391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 8 00:44:20.431911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 8 00:44:20.433667 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (542)
May 8 00:44:20.440357 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 8 00:44:20.446214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 8 00:44:20.447727 systemd[1]: Starting disk-uuid.service...
May 8 00:44:20.455708 disk-uuid[565]: Primary Header is updated.
May 8 00:44:20.455708 disk-uuid[565]: Secondary Entries is updated.
May 8 00:44:20.455708 disk-uuid[565]: Secondary Header is updated.
May 8 00:44:20.458671 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:21.471255 disk-uuid[566]: The operation has completed successfully.
May 8 00:44:21.472427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:44:21.490321 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:44:21.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.490421 systemd[1]: Finished disk-uuid.service.
May 8 00:44:21.494301 systemd[1]: Starting verity-setup.service...
May 8 00:44:21.511168 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:44:21.532655 systemd[1]: Found device dev-mapper-usr.device.
May 8 00:44:21.534784 systemd[1]: Mounting sysusr-usr.mount...
May 8 00:44:21.536524 systemd[1]: Finished verity-setup.service.
May 8 00:44:21.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.584706 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 8 00:44:21.584894 systemd[1]: Mounted sysusr-usr.mount.
May 8 00:44:21.585503 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 8 00:44:21.586261 systemd[1]: Starting ignition-setup.service...
May 8 00:44:21.587929 systemd[1]: Starting parse-ip-for-networkd.service...
May 8 00:44:21.595179 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:44:21.595222 kernel: BTRFS info (device vda6): using free space tree
May 8 00:44:21.595232 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:44:21.604375 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:44:21.609914 systemd[1]: Finished ignition-setup.service.
May 8 00:44:21.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.611347 systemd[1]: Starting ignition-fetch-offline.service...
May 8 00:44:21.674155 systemd[1]: Finished parse-ip-for-networkd.service.
May 8 00:44:21.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.674000 audit: BPF prog-id=9 op=LOAD
May 8 00:44:21.676078 systemd[1]: Starting systemd-networkd.service...
May 8 00:44:21.686901 ignition[652]: Ignition 2.14.0
May 8 00:44:21.686912 ignition[652]: Stage: fetch-offline
May 8 00:44:21.686950 ignition[652]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:21.686960 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:21.687095 ignition[652]: parsed url from cmdline: ""
May 8 00:44:21.687098 ignition[652]: no config URL provided
May 8 00:44:21.687103 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:44:21.687110 ignition[652]: no config at "/usr/lib/ignition/user.ign"
May 8 00:44:21.687129 ignition[652]: op(1): [started] loading QEMU firmware config module
May 8 00:44:21.687137 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:44:21.691078 ignition[652]: op(1): [finished] loading QEMU firmware config module
May 8 00:44:21.698021 systemd-networkd[741]: lo: Link UP
May 8 00:44:21.698035 systemd-networkd[741]: lo: Gained carrier
May 8 00:44:21.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.698394 systemd-networkd[741]: Enumeration completed
May 8 00:44:21.698575 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:44:21.698675 systemd[1]: Started systemd-networkd.service.
May 8 00:44:21.699608 systemd-networkd[741]: eth0: Link UP
May 8 00:44:21.699612 systemd-networkd[741]: eth0: Gained carrier
May 8 00:44:21.700084 systemd[1]: Reached target network.target.
May 8 00:44:21.702032 systemd[1]: Starting iscsiuio.service...
May 8 00:44:21.711070 systemd[1]: Started iscsiuio.service.
May 8 00:44:21.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.712630 systemd[1]: Starting iscsid.service...
May 8 00:44:21.715736 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:44:21.717001 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:44:21.717001 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 8 00:44:21.717001 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 8 00:44:21.717001 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored.
May 8 00:44:21.717001 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:44:21.717001 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 8 00:44:21.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.718987 systemd[1]: Started iscsid.service.
May 8 00:44:21.724212 systemd[1]: Starting dracut-initqueue.service...
May 8 00:44:21.734837 systemd[1]: Finished dracut-initqueue.service.
May 8 00:44:21.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.735701 systemd[1]: Reached target remote-fs-pre.target.
May 8 00:44:21.736916 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:44:21.738222 systemd[1]: Reached target remote-fs.target.
May 8 00:44:21.740338 systemd[1]: Starting dracut-pre-mount.service...
May 8 00:44:21.748977 systemd[1]: Finished dracut-pre-mount.service.
May 8 00:44:21.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.752622 ignition[652]: parsing config with SHA512: 87a4f9fc3c1c275bfdc9f47ac984104aa0e0d3d09fd81c5eca803197bc0fa608c68d93507aff91b580b695bd01827de46e39fedfb92b84871fc90352ab6d160d
May 8 00:44:21.760863 unknown[652]: fetched base config from "system"
May 8 00:44:21.760879 unknown[652]: fetched user config from "qemu"
May 8 00:44:21.761556 ignition[652]: fetch-offline: fetch-offline passed
May 8 00:44:21.762434 systemd[1]: Finished ignition-fetch-offline.service.
May 8 00:44:21.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.761619 ignition[652]: Ignition finished successfully
May 8 00:44:21.763547 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:44:21.764357 systemd[1]: Starting ignition-kargs.service...
May 8 00:44:21.773727 ignition[763]: Ignition 2.14.0
May 8 00:44:21.773739 ignition[763]: Stage: kargs
May 8 00:44:21.773853 ignition[763]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:21.775914 systemd[1]: Finished ignition-kargs.service.
May 8 00:44:21.773864 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:21.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.777858 systemd[1]: Starting ignition-disks.service...
May 8 00:44:21.774855 ignition[763]: kargs: kargs passed
May 8 00:44:21.774899 ignition[763]: Ignition finished successfully
May 8 00:44:21.784512 ignition[769]: Ignition 2.14.0
May 8 00:44:21.784523 ignition[769]: Stage: disks
May 8 00:44:21.784623 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 8 00:44:21.784634 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:21.785669 ignition[769]: disks: disks passed
May 8 00:44:21.785719 ignition[769]: Ignition finished successfully
May 8 00:44:21.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.787880 systemd[1]: Finished ignition-disks.service.
May 8 00:44:21.788743 systemd[1]: Reached target initrd-root-device.target.
May 8 00:44:21.789677 systemd[1]: Reached target local-fs-pre.target.
May 8 00:44:21.790708 systemd[1]: Reached target local-fs.target.
May 8 00:44:21.791716 systemd[1]: Reached target sysinit.target.
May 8 00:44:21.792892 systemd[1]: Reached target basic.target.
May 8 00:44:21.794702 systemd[1]: Starting systemd-fsck-root.service...
May 8 00:44:21.805494 systemd-fsck[777]: ROOT: clean, 623/553520 files, 56022/553472 blocks
May 8 00:44:21.809743 systemd[1]: Finished systemd-fsck-root.service.
May 8 00:44:21.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.813513 systemd[1]: Mounting sysroot.mount...
May 8 00:44:21.821525 systemd[1]: Mounted sysroot.mount.
May 8 00:44:21.822539 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 8 00:44:21.822193 systemd[1]: Reached target initrd-root-fs.target.
May 8 00:44:21.824102 systemd[1]: Mounting sysroot-usr.mount...
May 8 00:44:21.824841 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 8 00:44:21.824882 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:44:21.824908 systemd[1]: Reached target ignition-diskful.target.
May 8 00:44:21.827102 systemd[1]: Mounted sysroot-usr.mount.
May 8 00:44:21.829057 systemd[1]: Starting initrd-setup-root.service...
May 8 00:44:21.833402 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:44:21.837205 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory
May 8 00:44:21.841044 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:44:21.844826 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:44:21.872203 systemd[1]: Finished initrd-setup-root.service.
May 8 00:44:21.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.873671 systemd[1]: Starting ignition-mount.service...
May 8 00:44:21.875324 systemd[1]: Starting sysroot-boot.service...
May 8 00:44:21.879264 bash[828]: umount: /sysroot/usr/share/oem: not mounted.
May 8 00:44:21.888375 ignition[829]: INFO : Ignition 2.14.0
May 8 00:44:21.888375 ignition[829]: INFO : Stage: mount
May 8 00:44:21.888375 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:44:21.888375 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:21.891181 ignition[829]: INFO : mount: mount passed
May 8 00:44:21.891181 ignition[829]: INFO : Ignition finished successfully
May 8 00:44:21.892167 systemd[1]: Finished ignition-mount.service.
May 8 00:44:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:21.896270 systemd[1]: Finished sysroot-boot.service.
May 8 00:44:21.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:44:22.543030 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 8 00:44:22.550227 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
May 8 00:44:22.550258 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:44:22.550269 kernel: BTRFS info (device vda6): using free space tree
May 8 00:44:22.551674 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:44:22.554063 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 8 00:44:22.555437 systemd[1]: Starting ignition-files.service...
May 8 00:44:22.569682 ignition[858]: INFO : Ignition 2.14.0
May 8 00:44:22.569682 ignition[858]: INFO : Stage: files
May 8 00:44:22.570963 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:44:22.570963 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:44:22.570963 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:44:22.573524 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:44:22.573524 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:44:22.576869 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:44:22.577875 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:44:22.577875 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:44:22.577627 unknown[858]: wrote ssh authorized keys file for user: core
May 8 00:44:22.580800 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:44:22.580800 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 8 00:44:22.580800 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:44:22.580800 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:44:22.705678 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:44:22.975117 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:44:22.976621 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:44:22.976621 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 00:44:23.284688 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 8 00:44:23.305122 systemd-networkd[741]: eth0: Gained IPv6LL
May 8 00:44:23.388660 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:44:23.389969 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 8 00:44:23.619936 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 8 00:44:23.988993 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 8 00:44:23.988993 ignition[858]: INFO : files: op(d): [started] processing unit "containerd.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 8 00:44:23.991817 ignition[858]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
for "prepare-helm.service" May 8 00:44:23.991817 ignition[858]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:44:23.991817 ignition[858]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:44:23.991817 ignition[858]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:44:24.032061 ignition[858]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:44:24.033211 ignition[858]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:44:24.033211 ignition[858]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:44:24.033211 ignition[858]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:44:24.033211 ignition[858]: INFO : files: files passed May 8 00:44:24.033211 ignition[858]: INFO : Ignition finished successfully May 8 00:44:24.041662 kernel: kauditd_printk_skb: 23 callbacks suppressed May 8 00:44:24.041683 kernel: audit: type=1130 audit(1746665064.034:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.033617 systemd[1]: Finished ignition-files.service. May 8 00:44:24.046155 kernel: audit: type=1130 audit(1746665064.041:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.046173 kernel: audit: type=1131 audit(1746665064.042:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.035879 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 8 00:44:24.049260 kernel: audit: type=1130 audit(1746665064.046:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.036716 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
May 8 00:44:24.051086 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 8 00:44:24.037365 systemd[1]: Starting ignition-quench.service... May 8 00:44:24.053225 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:44:24.041146 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:44:24.041224 systemd[1]: Finished ignition-quench.service. May 8 00:44:24.043755 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 8 00:44:24.046870 systemd[1]: Reached target ignition-complete.target. May 8 00:44:24.050494 systemd[1]: Starting initrd-parse-etc.service... May 8 00:44:24.064355 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:44:24.064456 systemd[1]: Finished initrd-parse-etc.service. May 8 00:44:24.070047 kernel: audit: type=1130 audit(1746665064.065:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.070070 kernel: audit: type=1131 audit(1746665064.065:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.065849 systemd[1]: Reached target initrd-fs.target. May 8 00:44:24.070793 systemd[1]: Reached target initrd.target. May 8 00:44:24.071743 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 8 00:44:24.072598 systemd[1]: Starting dracut-pre-pivot.service... May 8 00:44:24.083569 systemd[1]: Finished dracut-pre-pivot.service. May 8 00:44:24.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.085086 systemd[1]: Starting initrd-cleanup.service... May 8 00:44:24.087416 kernel: audit: type=1130 audit(1746665064.084:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.093749 systemd[1]: Stopped target nss-lookup.target. May 8 00:44:24.094415 systemd[1]: Stopped target remote-cryptsetup.target. May 8 00:44:24.095470 systemd[1]: Stopped target timers.target. May 8 00:44:24.096465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:44:24.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.099688 kernel: audit: type=1131 audit(1746665064.096:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.096569 systemd[1]: Stopped dracut-pre-pivot.service. 
May 8 00:44:24.097511 systemd[1]: Stopped target initrd.target. May 8 00:44:24.100251 systemd[1]: Stopped target basic.target. May 8 00:44:24.101258 systemd[1]: Stopped target ignition-complete.target. May 8 00:44:24.102276 systemd[1]: Stopped target ignition-diskful.target. May 8 00:44:24.103401 systemd[1]: Stopped target initrd-root-device.target. May 8 00:44:24.104490 systemd[1]: Stopped target remote-fs.target. May 8 00:44:24.105549 systemd[1]: Stopped target remote-fs-pre.target. May 8 00:44:24.106611 systemd[1]: Stopped target sysinit.target. May 8 00:44:24.107755 systemd[1]: Stopped target local-fs.target. May 8 00:44:24.108740 systemd[1]: Stopped target local-fs-pre.target. May 8 00:44:24.109727 systemd[1]: Stopped target swap.target. May 8 00:44:24.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.110664 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:44:24.114933 kernel: audit: type=1131 audit(1746665064.111:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.110780 systemd[1]: Stopped dracut-pre-mount.service. May 8 00:44:24.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.111787 systemd[1]: Stopped target cryptsetup.target. May 8 00:44:24.118642 kernel: audit: type=1131 audit(1746665064.114:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.114370 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:44:24.114466 systemd[1]: Stopped dracut-initqueue.service. May 8 00:44:24.115551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:44:24.115643 systemd[1]: Stopped ignition-fetch-offline.service. May 8 00:44:24.118331 systemd[1]: Stopped target paths.target. May 8 00:44:24.119211 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:44:24.123693 systemd[1]: Stopped systemd-ask-password-console.path. May 8 00:44:24.124405 systemd[1]: Stopped target slices.target. May 8 00:44:24.125405 systemd[1]: Stopped target sockets.target. May 8 00:44:24.126334 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:44:24.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.126436 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 00:44:24.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.127639 systemd[1]: ignition-files.service: Deactivated successfully. 
May 8 00:44:24.127758 systemd[1]: Stopped ignition-files.service. May 8 00:44:24.131104 iscsid[748]: iscsid shutting down. May 8 00:44:24.129773 systemd[1]: Stopping ignition-mount.service... May 8 00:44:24.131816 systemd[1]: Stopping iscsid.service... May 8 00:44:24.132643 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:44:24.132774 systemd[1]: Stopped kmod-static-nodes.service. May 8 00:44:24.134466 systemd[1]: Stopping sysroot-boot.service... May 8 00:44:24.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.137228 ignition[898]: INFO : Ignition 2.14.0 May 8 00:44:24.137228 ignition[898]: INFO : Stage: umount May 8 00:44:24.137228 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:44:24.137228 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:44:24.137228 ignition[898]: INFO : umount: umount passed May 8 00:44:24.137228 ignition[898]: INFO : Ignition finished successfully May 8 00:44:24.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.136683 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:44:24.136830 systemd[1]: Stopped systemd-udev-trigger.service. May 8 00:44:24.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.137854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:44:24.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.137952 systemd[1]: Stopped dracut-pre-trigger.service. May 8 00:44:24.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.140460 systemd[1]: iscsid.service: Deactivated successfully. May 8 00:44:24.140555 systemd[1]: Stopped iscsid.service. May 8 00:44:24.141907 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:44:24.141980 systemd[1]: Stopped ignition-mount.service. May 8 00:44:24.143015 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:44:24.143083 systemd[1]: Closed iscsid.socket. May 8 00:44:24.143826 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 8 00:44:24.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.143865 systemd[1]: Stopped ignition-disks.service. May 8 00:44:24.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.145518 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:44:24.145561 systemd[1]: Stopped ignition-kargs.service. May 8 00:44:24.146544 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:44:24.146577 systemd[1]: Stopped ignition-setup.service. May 8 00:44:24.148516 systemd[1]: Stopping iscsiuio.service... May 8 00:44:24.152021 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:44:24.152474 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 00:44:24.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.152557 systemd[1]: Stopped iscsiuio.service. May 8 00:44:24.153378 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:44:24.153453 systemd[1]: Finished initrd-cleanup.service. May 8 00:44:24.155030 systemd[1]: Stopped target network.target. May 8 00:44:24.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.156091 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:44:24.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.156126 systemd[1]: Closed iscsiuio.socket. May 8 00:44:24.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.157209 systemd[1]: Stopping systemd-networkd.service... May 8 00:44:24.158416 systemd[1]: Stopping systemd-resolved.service... May 8 00:44:24.159054 systemd-networkd[741]: eth0: DHCPv6 lease lost May 8 00:44:24.172000 audit: BPF prog-id=9 op=UNLOAD May 8 00:44:24.160674 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:44:24.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.160773 systemd[1]: Stopped systemd-networkd.service. May 8 00:44:24.161631 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:44:24.161672 systemd[1]: Closed systemd-networkd.socket. May 8 00:44:24.163336 systemd[1]: Stopping network-cleanup.service... May 8 00:44:24.164291 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 8 00:44:24.177000 audit: BPF prog-id=6 op=UNLOAD May 8 00:44:24.164343 systemd[1]: Stopped parse-ip-for-networkd.service. May 8 00:44:24.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.166513 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:44:24.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.166558 systemd[1]: Stopped systemd-sysctl.service. May 8 00:44:24.168033 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:44:24.168076 systemd[1]: Stopped systemd-modules-load.service. May 8 00:44:24.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.168940 systemd[1]: Stopping systemd-udevd.service... May 8 00:44:24.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.173307 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:44:24.173922 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:44:24.174025 systemd[1]: Stopped systemd-resolved.service. May 8 00:44:24.178304 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:44:24.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.178444 systemd[1]: Stopped systemd-udevd.service. May 8 00:44:24.179595 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:44:24.179694 systemd[1]: Stopped network-cleanup.service. May 8 00:44:24.180484 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:44:24.180518 systemd[1]: Closed systemd-udevd-control.socket. May 8 00:44:24.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.182318 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:44:24.182357 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 00:44:24.183044 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:44:24.183086 systemd[1]: Stopped dracut-pre-udev.service. May 8 00:44:24.184116 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:44:24.184152 systemd[1]: Stopped dracut-cmdline.service. 
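The run of "Stopped target …" messages above records the order in which the initrd tears units down before pivoting to the real root. A small sketch that recovers that ordering from the same assumed `boot.log` capture:

```python
import re

# Recover the order in which the initrd stopped targets before switch-root.
STOP_RE = re.compile(r"systemd\[1\]: Stopped target ([a-z0-9-]+\.target)")

def teardown_order(path="boot.log"):   # assumed capture file
    order = []
    with open(path) as fh:
        for line in fh:
            order.extend(STOP_RE.findall(line))
    return order

if __name__ == "__main__":
    # e.g. ['nss-lookup.target', 'remote-cryptsetup.target', 'timers.target', ...]
    print(teardown_order())
```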
May 8 00:44:24.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.185348 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:44:24.185383 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 00:44:24.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:24.187186 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 00:44:24.188378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:44:24.188434 systemd[1]: Stopped systemd-vconsole-setup.service. May 8 00:44:24.193244 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:44:24.193338 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 00:44:24.197393 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:44:24.197488 systemd[1]: Stopped sysroot-boot.service. May 8 00:44:24.198719 systemd[1]: Reached target initrd-switch-root.target. May 8 00:44:24.199573 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:44:24.199616 systemd[1]: Stopped initrd-setup-root.service. May 8 00:44:24.201451 systemd[1]: Starting initrd-switch-root.service... May 8 00:44:24.207536 systemd[1]: Switching root. May 8 00:44:24.208000 audit: BPF prog-id=5 op=UNLOAD May 8 00:44:24.208000 audit: BPF prog-id=4 op=UNLOAD May 8 00:44:24.208000 audit: BPF prog-id=3 op=UNLOAD May 8 00:44:24.210000 audit: BPF prog-id=8 op=UNLOAD May 8 00:44:24.210000 audit: BPF prog-id=7 op=UNLOAD May 8 00:44:24.226858 systemd-journald[290]: Journal stopped May 8 00:44:26.236978 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 8 00:44:26.237037 kernel: SELinux: Class mctp_socket not defined in policy. May 8 00:44:26.237051 kernel: SELinux: Class anon_inode not defined in policy. May 8 00:44:26.237063 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 00:44:26.237073 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:44:26.237083 kernel: SELinux: policy capability open_perms=1 May 8 00:44:26.237095 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:44:26.237104 kernel: SELinux: policy capability always_check_network=0 May 8 00:44:26.237115 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:44:26.237124 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:44:26.237134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:44:26.237148 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:44:26.237159 systemd[1]: Successfully loaded SELinux policy in 32.314ms. May 8 00:44:26.237176 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.848ms. May 8 00:44:26.237188 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:44:26.237199 systemd[1]: Detected virtualization kvm. May 8 00:44:26.237210 systemd[1]: Detected architecture arm64. May 8 00:44:26.237221 systemd[1]: Detected first boot. 
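The gap between the last initrd journal message ("Journal stopped", 00:44:24.226858) and the first message from the new root's journald (00:44:26.236978) bounds the root switch at roughly two seconds. A sketch of that arithmetic from the wall-clock prefixes; the prefix omits the year, so 2025 is taken from the ISO timestamps that appear later in the log:

```python
from datetime import datetime

# Wall-clock prefixes in this log omit the year; 2025 is taken from the
# ISO timestamps printed further down (e.g. "2025-05-08T00:44:27Z").
def ts(stamp):
    return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

journal_stopped = ts("May 8 00:44:24.226858")   # last initrd journal message
journal_resumed = ts("May 8 00:44:26.236978")   # first message after switch-root
print(journal_resumed - journal_stopped)        # ~0:00:02.010120
```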
May 8 00:44:26.237231 systemd[1]: Initializing machine ID from VM UUID. May 8 00:44:26.237242 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 8 00:44:26.237254 systemd[1]: Populated /etc with preset unit settings. May 8 00:44:26.237266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:44:26.237277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:44:26.237289 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:26.237300 systemd[1]: Queued start job for default target multi-user.target. May 8 00:44:26.237311 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 00:44:26.237322 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 00:44:26.237332 systemd[1]: Created slice system-addon\x2drun.slice. May 8 00:44:26.237342 systemd[1]: Created slice system-getty.slice. May 8 00:44:26.237353 systemd[1]: Created slice system-modprobe.slice. May 8 00:44:26.237363 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 00:44:26.237374 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 8 00:44:26.237384 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 00:44:26.237399 systemd[1]: Created slice user.slice. May 8 00:44:26.237410 systemd[1]: Started systemd-ask-password-console.path. May 8 00:44:26.237421 systemd[1]: Started systemd-ask-password-wall.path. May 8 00:44:26.237432 systemd[1]: Set up automount boot.automount. May 8 00:44:26.237442 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:44:26.237453 systemd[1]: Reached target integritysetup.target. May 8 00:44:26.237463 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:44:26.237473 systemd[1]: Reached target remote-fs.target. May 8 00:44:26.237485 systemd[1]: Reached target slices.target. May 8 00:44:26.237496 systemd[1]: Reached target swap.target. May 8 00:44:26.237506 systemd[1]: Reached target torcx.target. May 8 00:44:26.237516 systemd[1]: Reached target veritysetup.target. May 8 00:44:26.237526 systemd[1]: Listening on systemd-coredump.socket. May 8 00:44:26.237537 systemd[1]: Listening on systemd-initctl.socket. May 8 00:44:26.237547 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:44:26.237557 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:44:26.237568 systemd[1]: Listening on systemd-journald.socket. May 8 00:44:26.237579 systemd[1]: Listening on systemd-networkd.socket. May 8 00:44:26.237591 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:44:26.237601 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:44:26.237611 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:44:26.237622 systemd[1]: Mounting dev-hugepages.mount... May 8 00:44:26.237632 systemd[1]: Mounting dev-mqueue.mount... May 8 00:44:26.237642 systemd[1]: Mounting media.mount... May 8 00:44:26.237661 systemd[1]: Mounting sys-kernel-debug.mount... May 8 00:44:26.237685 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:44:26.237701 systemd[1]: Mounting tmp.mount... May 8 00:44:26.237713 systemd[1]: Starting flatcar-tmpfiles.service... 
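The long "(+PAM +AUDIT …)" string in the "systemd 252 running in system mode" line encodes compile-time build options; -BPF_FRAMEWORK in particular is consistent with the BPF/cgroup-firewalling warning journald prints just below. A sketch that splits the string into enabled and disabled sets:

```python
# Compile-time feature flags from the "systemd 252 running in system mode"
# line above: +X means built in, -X means built out.
flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
         "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
         "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
         "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
         "-XKBCOMMON +UTMP +SYSVINIT").split()
enabled  = sorted(f[1:] for f in flags if f[0] == "+")
disabled = sorted(f[1:] for f in flags if f[0] == "-")
print(disabled)  # includes BPF_FRAMEWORK, which fits the journald
                 # "does not support BPF/cgroup firewalling" notice below
```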
May 8 00:44:26.237723 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:44:26.237733 systemd[1]: Starting kmod-static-nodes.service... May 8 00:44:26.237744 systemd[1]: Starting modprobe@configfs.service... May 8 00:44:26.237760 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:44:26.237774 systemd[1]: Starting modprobe@drm.service... May 8 00:44:26.237784 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:44:26.237794 systemd[1]: Starting modprobe@fuse.service... May 8 00:44:26.237804 systemd[1]: Starting modprobe@loop.service... May 8 00:44:26.237817 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:44:26.237828 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 8 00:44:26.237838 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 8 00:44:26.237848 systemd[1]: Starting systemd-journald.service... May 8 00:44:26.237858 systemd[1]: Starting systemd-modules-load.service... May 8 00:44:26.237869 systemd[1]: Starting systemd-network-generator.service... May 8 00:44:26.237879 systemd[1]: Starting systemd-remount-fs.service... May 8 00:44:26.237889 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:44:26.237900 systemd[1]: Mounted dev-hugepages.mount. May 8 00:44:26.237911 systemd[1]: Mounted dev-mqueue.mount. May 8 00:44:26.237922 systemd[1]: Mounted media.mount. May 8 00:44:26.237932 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:44:26.237943 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:44:26.237953 kernel: loop: module loaded May 8 00:44:26.237963 systemd[1]: Mounted tmp.mount. May 8 00:44:26.237973 systemd[1]: Finished kmod-static-nodes.service. May 8 00:44:26.237984 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:44:26.238003 systemd[1]: Finished modprobe@configfs.service. May 8 00:44:26.238017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:44:26.238028 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:44:26.238038 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:44:26.238048 systemd[1]: Finished modprobe@drm.service. May 8 00:44:26.238060 systemd-journald[1029]: Journal started May 8 00:44:26.238105 systemd-journald[1029]: Runtime Journal (/run/log/journal/819d6b8a66b34b98aba77e52f59ec651) is 6.0M, max 48.7M, 42.6M free. May 8 00:44:26.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:26.235000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:44:26.235000 audit[1029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe86212d0 a2=4000 a3=1 items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:44:26.235000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:44:26.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.241108 kernel: fuse: init (API version 7.34) May 8 00:44:26.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.242801 systemd[1]: Started systemd-journald.service. May 8 00:44:26.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.244399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:44:26.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.245217 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:44:26.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.246138 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:44:26.246337 systemd[1]: Finished modprobe@fuse.service. May 8 00:44:26.247205 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:44:26.247501 systemd[1]: Finished modprobe@loop.service. May 8 00:44:26.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:26.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.248725 systemd[1]: Finished systemd-modules-load.service. May 8 00:44:26.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.250082 systemd[1]: Finished systemd-network-generator.service. May 8 00:44:26.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.251357 systemd[1]: Finished systemd-remount-fs.service. May 8 00:44:26.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.252375 systemd[1]: Reached target network-pre.target. May 8 00:44:26.254375 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:44:26.256283 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:44:26.256904 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:44:26.258688 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:44:26.260516 systemd[1]: Starting systemd-journal-flush.service... May 8 00:44:26.262054 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:44:26.263207 systemd[1]: Starting systemd-random-seed.service... May 8 00:44:26.263970 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:44:26.266003 systemd[1]: Starting systemd-sysctl.service... May 8 00:44:26.270102 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:44:26.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.271076 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 00:44:26.271844 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:44:26.273931 systemd[1]: Starting systemd-sysusers.service... May 8 00:44:26.276269 systemd-journald[1029]: Time spent on flushing to /var/log/journal/819d6b8a66b34b98aba77e52f59ec651 is 12.043ms for 937 entries. May 8 00:44:26.276269 systemd-journald[1029]: System Journal (/var/log/journal/819d6b8a66b34b98aba77e52f59ec651) is 8.0M, max 195.6M, 187.6M free. May 8 00:44:26.417831 systemd-journald[1029]: Received client request to flush runtime journal. May 8 00:44:26.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:26.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.285148 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:44:26.418335 udevadm[1079]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:44:26.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.287179 systemd[1]: Starting systemd-udev-settle.service... May 8 00:44:26.288106 systemd[1]: Finished systemd-sysctl.service. May 8 00:44:26.309319 systemd[1]: Finished systemd-sysusers.service. May 8 00:44:26.311200 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:44:26.317393 systemd[1]: Finished systemd-random-seed.service. May 8 00:44:26.318191 systemd[1]: Reached target first-boot-complete.target. May 8 00:44:26.327771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:44:26.418939 systemd[1]: Finished systemd-journal-flush.service. May 8 00:44:26.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.705998 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:44:26.707880 systemd[1]: Starting systemd-udevd.service... May 8 00:44:26.726987 systemd-udevd[1089]: Using default interface naming scheme 'v252'. May 8 00:44:26.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.748402 systemd[1]: Started systemd-udevd.service. May 8 00:44:26.750576 systemd[1]: Starting systemd-networkd.service... May 8 00:44:26.755817 systemd[1]: Starting systemd-userdbd.service... May 8 00:44:26.768850 systemd[1]: Found device dev-ttyAMA0.device. May 8 00:44:26.817762 systemd[1]: Started systemd-userdbd.service. May 8 00:44:26.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.827092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:44:26.845087 systemd[1]: Finished systemd-udev-settle.service. May 8 00:44:26.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:26.847157 systemd[1]: Starting lvm2-activation-early.service... May 8 00:44:26.877636 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:44:26.897530 systemd-networkd[1097]: lo: Link UP May 8 00:44:26.897542 systemd-networkd[1097]: lo: Gained carrier May 8 00:44:26.897951 systemd-networkd[1097]: Enumeration completed May 8 00:44:26.898087 systemd[1]: Started systemd-networkd.service. May 8 00:44:26.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.898951 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:44:26.900160 systemd-networkd[1097]: eth0: Link UP May 8 00:44:26.900171 systemd-networkd[1097]: eth0: Gained carrier May 8 00:44:26.918619 systemd[1]: Finished lvm2-activation-early.service. May 8 00:44:26.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.919394 systemd[1]: Reached target cryptsetup.target. May 8 00:44:26.921293 systemd[1]: Starting lvm2-activation.service... May 8 00:44:26.922214 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:44:26.925130 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:44:26.957704 systemd[1]: Finished lvm2-activation.service. May 8 00:44:26.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:26.958446 systemd[1]: Reached target local-fs-pre.target. May 8 00:44:26.959136 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:44:26.959164 systemd[1]: Reached target local-fs.target. May 8 00:44:26.959762 systemd[1]: Reached target machines.target. May 8 00:44:26.961554 systemd[1]: Starting ldconfig.service... May 8 00:44:26.962947 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:44:26.963001 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:26.964041 systemd[1]: Starting systemd-boot-update.service... May 8 00:44:26.965614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:44:26.967471 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:44:26.969791 systemd[1]: Starting systemd-sysext.service... May 8 00:44:26.972958 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl) May 8 00:44:26.974024 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:44:26.981870 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:44:26.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:26.997858 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:44:27.003058 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:44:27.003296 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:44:27.044665 kernel: loop0: detected capacity change from 0 to 194096 May 8 00:44:27.045634 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:44:27.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.056675 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:44:27.062339 systemd-fsck[1138]: fsck.fat 4.2 (2021-01-31) May 8 00:44:27.062339 systemd-fsck[1138]: /dev/vda1: 236 files, 117182/258078 clusters May 8 00:44:27.064058 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:44:27.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.070695 kernel: loop1: detected capacity change from 0 to 194096 May 8 00:44:27.074457 (sd-sysext)[1147]: Using extensions 'kubernetes'. May 8 00:44:27.074805 (sd-sysext)[1147]: Merged extensions into '/usr'. May 8 00:44:27.095887 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:44:27.097160 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:44:27.099035 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:44:27.100931 systemd[1]: Starting modprobe@loop.service... May 8 00:44:27.101621 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:44:27.101794 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.102511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:44:27.102695 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:44:27.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.103955 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:44:27.104097 systemd[1]: Finished modprobe@loop.service. May 8 00:44:27.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.105311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:44:27.105464 systemd[1]: Finished modprobe@efi_pstore.service. 
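The "(sd-sysext)" messages show systemd-sysext discovering the kubernetes image through the symlink Ignition created earlier ("/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw") and overlaying it onto /usr. A sketch of checking that linkage on the running host:

```python
import os

# Ignition wrote /etc/extensions/kubernetes.raw as a symlink to the image it
# downloaded; systemd-sysext merges whatever /etc/extensions points at.
link = "/etc/extensions/kubernetes.raw"
if os.path.islink(link):
    print(link, "->", os.readlink(link))
    # expected per the log: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
else:
    print(link, "is not a symlink on this host")
```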
May 8 00:44:27.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.106571 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:44:27.106682 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:44:27.167547 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:44:27.171131 systemd[1]: Finished ldconfig.service. May 8 00:44:27.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.217920 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:44:27.219621 systemd[1]: Mounting boot.mount... May 8 00:44:27.221323 systemd[1]: Mounting usr-share-oem.mount... May 8 00:44:27.227543 systemd[1]: Mounted boot.mount. May 8 00:44:27.228367 systemd[1]: Mounted usr-share-oem.mount. May 8 00:44:27.230148 systemd[1]: Finished systemd-sysext.service. May 8 00:44:27.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.232961 systemd[1]: Starting ensure-sysext.service... May 8 00:44:27.234703 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:44:27.235832 systemd[1]: Finished systemd-boot-update.service. May 8 00:44:27.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.239775 systemd[1]: Reloading. May 8 00:44:27.243781 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:44:27.244436 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:44:27.245705 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:44:27.271788 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-05-08T00:44:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:44:27.271816 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-05-08T00:44:27Z" level=info msg="torcx already run" May 8 00:44:27.337117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:44:27.337135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. May 8 00:44:27.352300 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:27.397355 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:44:27.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.401194 systemd[1]: Starting audit-rules.service... May 8 00:44:27.402863 systemd[1]: Starting clean-ca-certificates.service... May 8 00:44:27.404542 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:44:27.406870 systemd[1]: Starting systemd-resolved.service... May 8 00:44:27.409159 systemd[1]: Starting systemd-timesyncd.service... May 8 00:44:27.411048 systemd[1]: Starting systemd-update-utmp.service... May 8 00:44:27.412357 systemd[1]: Finished clean-ca-certificates.service. May 8 00:44:27.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.417334 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:44:27.418439 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:44:27.420284 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:44:27.422034 systemd[1]: Starting modprobe@loop.service... May 8 00:44:27.422714 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:44:27.422844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.422953 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:44:27.424005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:44:27.424164 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:44:27.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.425220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:44:27.425371 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:44:27.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:27.426359 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:44:27.427000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:44:27.427788 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:44:27.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.427941 systemd[1]: Finished modprobe@loop.service. May 8 00:44:27.430463 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:44:27.435385 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:44:27.437023 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:44:27.437625 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:44:27.437805 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.437942 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:44:27.439136 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 00:44:27.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.440375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:44:27.440507 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:44:27.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.441694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:44:27.441848 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:44:27.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.444916 systemd[1]: Finished systemd-update-utmp.service. 
May 8 00:44:27.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.446934 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:44:27.447890 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:44:27.449996 systemd[1]: Starting modprobe@drm.service... May 8 00:44:27.451770 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:44:27.453298 systemd[1]: Starting modprobe@loop.service... May 8 00:44:27.454034 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:44:27.454089 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.455144 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 00:44:27.457066 systemd[1]: Starting systemd-update-done.service... May 8 00:44:27.457880 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:44:27.458800 systemd[1]: Finished ensure-sysext.service. May 8 00:44:27.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.459695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:44:27.459839 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:44:27.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.460816 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:44:27.460948 systemd[1]: Finished modprobe@drm.service. May 8 00:44:27.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.461879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:44:27.462013 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:44:27.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:44:27.462909 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:44:27.463058 systemd[1]: Finished modprobe@loop.service. May 8 00:44:27.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.464557 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:44:27.464598 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:44:27.468906 systemd[1]: Finished systemd-update-done.service. May 8 00:44:27.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:44:27.483907 augenrules[1280]: No rules May 8 00:44:27.483000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:44:27.483000 audit[1280]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2c5be70 a2=420 a3=0 items=0 ppid=1231 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:44:27.483000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:44:27.484567 systemd[1]: Finished audit-rules.service. May 8 00:44:27.495960 systemd[1]: Started systemd-timesyncd.service. May 8 00:44:27.496852 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:44:27.496911 systemd-timesyncd[1237]: Initial clock synchronization to Thu 2025-05-08 00:44:27.586660 UTC. May 8 00:44:27.497008 systemd[1]: Reached target time-set.target. May 8 00:44:27.499761 systemd-resolved[1236]: Positive Trust Anchors: May 8 00:44:27.501797 systemd-resolved[1236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:44:27.501884 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:44:27.515471 systemd-resolved[1236]: Defaulting to hostname 'linux'. May 8 00:44:27.520576 systemd[1]: Started systemd-resolved.service. May 8 00:44:27.521264 systemd[1]: Reached target network.target. May 8 00:44:27.521845 systemd[1]: Reached target nss-lookup.target. May 8 00:44:27.522409 systemd[1]: Reached target sysinit.target. May 8 00:44:27.523070 systemd[1]: Started motdgen.path. May 8 00:44:27.523596 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:44:27.524565 systemd[1]: Started logrotate.timer. 
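
[Editor's note] The audit triple above (CONFIG_CHANGE, SYSCALL, PROCTITLE) records augenrules loading an empty rule set. The PROCTITLE field hex-encodes the command line because the kernel stores argv with NUL separators; decoding it recovers the exact auditctl invocation. A standard-library-only sketch:

    # decode_proctitle.py -- recover argv from an audit PROCTITLE hex blob.
    hexblob = ("2F7362696E2F617564697463746C002D52"
               "002F6574632F61756469742F61756469742E72756C6573")
    # Split on the NUL separators to get the original argument vector.
    argv = bytes.fromhex(hexblob).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: /sbin/auditctl -R /etc/audit/audit.rules
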
May 8 00:44:27.525225 systemd[1]: Started mdadm.timer. May 8 00:44:27.525744 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:44:27.526345 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:44:27.526373 systemd[1]: Reached target paths.target. May 8 00:44:27.526922 systemd[1]: Reached target timers.target. May 8 00:44:27.527761 systemd[1]: Listening on dbus.socket. May 8 00:44:27.529373 systemd[1]: Starting docker.socket... May 8 00:44:27.530929 systemd[1]: Listening on sshd.socket. May 8 00:44:27.531576 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.531895 systemd[1]: Listening on docker.socket. May 8 00:44:27.532473 systemd[1]: Reached target sockets.target. May 8 00:44:27.533060 systemd[1]: Reached target basic.target. May 8 00:44:27.533747 systemd[1]: System is tainted: cgroupsv1 May 8 00:44:27.533807 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:44:27.533832 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:44:27.534784 systemd[1]: Starting containerd.service... May 8 00:44:27.536371 systemd[1]: Starting dbus.service... May 8 00:44:27.538003 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:44:27.539787 systemd[1]: Starting extend-filesystems.service... May 8 00:44:27.540501 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:44:27.541794 systemd[1]: Starting motdgen.service... May 8 00:44:27.543483 systemd[1]: Starting prepare-helm.service... May 8 00:44:27.545419 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:44:27.547929 systemd[1]: Starting sshd-keygen.service... May 8 00:44:27.550375 systemd[1]: Starting systemd-logind.service... May 8 00:44:27.554270 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:44:27.554336 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:44:27.555561 systemd[1]: Starting update-engine.service... May 8 00:44:27.557422 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:44:27.560477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:44:27.560733 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:44:27.561289 jq[1311]: true May 8 00:44:27.564669 jq[1292]: false May 8 00:44:27.569689 dbus-daemon[1291]: [system] SELinux support is enabled May 8 00:44:27.571606 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:44:27.571896 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:44:27.572088 systemd[1]: Started dbus.service. May 8 00:44:27.574773 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:44:27.574989 systemd[1]: Finished motdgen.service. May 8 00:44:27.576455 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
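
[Editor's note] The "System is tainted: cgroupsv1" line means this image still boots on the legacy cgroup v1 hierarchy; the blkio warnings from dockerd and the SystemdCgroup:false runc option further down are consistent with that. A quick way to check which hierarchy a host is on, as a sketch (the paths are the standard kernel mount points, nothing Flatcar-specific):

    # cgroup_mode.py -- report the cgroup hierarchy in use on this host.
    import os

    if os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
        print("unified cgroup v2 hierarchy")
    elif os.path.exists("/sys/fs/cgroup/unified/cgroup.controllers"):
        print("hybrid layout: v1 controllers plus a v2 mount at .../unified")
    else:
        print("legacy cgroup v1 hierarchy (the 'cgroupsv1' taint above)")
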
May 8 00:44:27.576485 systemd[1]: Reached target system-config.target. May 8 00:44:27.577407 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:44:27.577424 systemd[1]: Reached target user-config.target. May 8 00:44:27.579671 jq[1322]: true May 8 00:44:27.584553 tar[1313]: linux-arm64/helm May 8 00:44:27.593338 extend-filesystems[1293]: Found loop1 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda May 8 00:44:27.593338 extend-filesystems[1293]: Found vda1 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda2 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda3 May 8 00:44:27.593338 extend-filesystems[1293]: Found usr May 8 00:44:27.593338 extend-filesystems[1293]: Found vda4 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda6 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda7 May 8 00:44:27.593338 extend-filesystems[1293]: Found vda9 May 8 00:44:27.593338 extend-filesystems[1293]: Checking size of /dev/vda9 May 8 00:44:27.638795 extend-filesystems[1293]: Resized partition /dev/vda9 May 8 00:44:27.639783 update_engine[1309]: I0508 00:44:27.635838 1309 main.cc:92] Flatcar Update Engine starting May 8 00:44:27.641833 extend-filesystems[1349]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:44:27.644874 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:44:27.646296 systemd-logind[1304]: New seat seat0. May 8 00:44:27.649397 systemd[1]: Started update-engine.service. May 8 00:44:27.652476 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:44:27.652519 update_engine[1309]: I0508 00:44:27.649451 1309 update_check_scheduler.cc:74] Next update check in 8m37s May 8 00:44:27.653389 bash[1345]: Updated "/home/core/.ssh/authorized_keys" May 8 00:44:27.655419 systemd[1]: Started locksmithd.service. May 8 00:44:27.656474 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:44:27.658744 systemd[1]: Started systemd-logind.service. May 8 00:44:27.677668 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:44:27.688006 env[1315]: time="2025-05-08T00:44:27.687576200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:44:27.689856 extend-filesystems[1349]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:44:27.689856 extend-filesystems[1349]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:44:27.689856 extend-filesystems[1349]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:44:27.693178 extend-filesystems[1293]: Resized filesystem in /dev/vda9 May 8 00:44:27.692931 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:44:27.693162 systemd[1]: Finished extend-filesystems.service. May 8 00:44:27.716889 env[1315]: time="2025-05-08T00:44:27.716739520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:44:27.717229 env[1315]: time="2025-05-08T00:44:27.717180640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.719067 env[1315]: time="2025-05-08T00:44:27.718955600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:44:27.719067 env[1315]: time="2025-05-08T00:44:27.718988200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.719490 env[1315]: time="2025-05-08T00:44:27.719461000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:44:27.719567 env[1315]: time="2025-05-08T00:44:27.719551280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.719632 env[1315]: time="2025-05-08T00:44:27.719614960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:44:27.719713 env[1315]: time="2025-05-08T00:44:27.719697320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.719870 env[1315]: time="2025-05-08T00:44:27.719851160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.720227 env[1315]: time="2025-05-08T00:44:27.720204520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:44:27.720567 env[1315]: time="2025-05-08T00:44:27.720543080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:44:27.720781 env[1315]: time="2025-05-08T00:44:27.720762080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:44:27.720905 env[1315]: time="2025-05-08T00:44:27.720885880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 00:44:27.721425 env[1315]: time="2025-05-08T00:44:27.721103600Z" level=info msg="metadata content store policy set" policy=shared May 8 00:44:27.724803 locksmithd[1352]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:44:27.728344 env[1315]: time="2025-05-08T00:44:27.728307680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:44:27.728344 env[1315]: time="2025-05-08T00:44:27.728345680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:44:27.728425 env[1315]: time="2025-05-08T00:44:27.728362320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:44:27.728425 env[1315]: time="2025-05-08T00:44:27.728392160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728425 env[1315]: time="2025-05-08T00:44:27.728406640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 8 00:44:27.728493 env[1315]: time="2025-05-08T00:44:27.728425640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728493 env[1315]: time="2025-05-08T00:44:27.728439640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728817 env[1315]: time="2025-05-08T00:44:27.728793440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728852 env[1315]: time="2025-05-08T00:44:27.728820120Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728852 env[1315]: time="2025-05-08T00:44:27.728836560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728893 env[1315]: time="2025-05-08T00:44:27.728849520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:44:27.728893 env[1315]: time="2025-05-08T00:44:27.728862560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:44:27.728994 env[1315]: time="2025-05-08T00:44:27.728972760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:44:27.729067 env[1315]: time="2025-05-08T00:44:27.729050040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:44:27.729341 env[1315]: time="2025-05-08T00:44:27.729321720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:44:27.729375 env[1315]: time="2025-05-08T00:44:27.729351440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729375 env[1315]: time="2025-05-08T00:44:27.729365880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:44:27.729478 env[1315]: time="2025-05-08T00:44:27.729462160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729513 env[1315]: time="2025-05-08T00:44:27.729479040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729513 env[1315]: time="2025-05-08T00:44:27.729497840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729513 env[1315]: time="2025-05-08T00:44:27.729509320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729568 env[1315]: time="2025-05-08T00:44:27.729521960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729568 env[1315]: time="2025-05-08T00:44:27.729533080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729568 env[1315]: time="2025-05-08T00:44:27.729544320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729568 env[1315]: time="2025-05-08T00:44:27.729555960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 8 00:44:27.729660 env[1315]: time="2025-05-08T00:44:27.729568240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:44:27.729724 env[1315]: time="2025-05-08T00:44:27.729703520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729773 env[1315]: time="2025-05-08T00:44:27.729727280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729773 env[1315]: time="2025-05-08T00:44:27.729740120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:44:27.729773 env[1315]: time="2025-05-08T00:44:27.729759960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:44:27.729835 env[1315]: time="2025-05-08T00:44:27.729775800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:44:27.729835 env[1315]: time="2025-05-08T00:44:27.729786840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:44:27.729835 env[1315]: time="2025-05-08T00:44:27.729802720Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:44:27.729893 env[1315]: time="2025-05-08T00:44:27.729834680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:44:27.730065 env[1315]: time="2025-05-08T00:44:27.730013640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.730070200Z" level=info msg="Connect containerd service" May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.730106840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.730776680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.731147000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.731196880Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:44:27.732442 env[1315]: time="2025-05-08T00:44:27.731245160Z" level=info msg="containerd successfully booted in 0.050940s" May 8 00:44:27.731341 systemd[1]: Started containerd.service. May 8 00:44:27.733419 env[1315]: time="2025-05-08T00:44:27.733370720Z" level=info msg="Start subscribing containerd event" May 8 00:44:27.733477 env[1315]: time="2025-05-08T00:44:27.733427200Z" level=info msg="Start recovering state" May 8 00:44:27.733500 env[1315]: time="2025-05-08T00:44:27.733490200Z" level=info msg="Start event monitor" May 8 00:44:27.733534 env[1315]: time="2025-05-08T00:44:27.733509400Z" level=info msg="Start snapshots syncer" May 8 00:44:27.733534 env[1315]: time="2025-05-08T00:44:27.733519560Z" level=info msg="Start cni network conf syncer for default" May 8 00:44:27.733534 env[1315]: time="2025-05-08T00:44:27.733526840Z" level=info msg="Start streaming server" May 8 00:44:27.968997 tar[1313]: linux-arm64/LICENSE May 8 00:44:27.969356 tar[1313]: linux-arm64/README.md May 8 00:44:27.973587 systemd[1]: Finished prepare-helm.service. May 8 00:44:28.233848 systemd-networkd[1097]: eth0: Gained IPv6LL May 8 00:44:28.235719 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:44:28.236971 systemd[1]: Reached target network-online.target. May 8 00:44:28.239157 systemd[1]: Starting kubelet.service... May 8 00:44:28.758912 systemd[1]: Started kubelet.service. May 8 00:44:29.239133 kubelet[1377]: E0508 00:44:29.239032 1377 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:44:29.241151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:44:29.241307 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:44:30.328352 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:44:30.345589 systemd[1]: Finished sshd-keygen.service. May 8 00:44:30.347938 systemd[1]: Starting issuegen.service... May 8 00:44:30.352636 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:44:30.352851 systemd[1]: Finished issuegen.service. May 8 00:44:30.354849 systemd[1]: Starting systemd-user-sessions.service... 
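
[Editor's note] The single-line "Start cri plugin with config {...}" dump earlier in this stream is containerd's effective CRI configuration. Rendered as the config.toml it corresponds to, the load-bearing values are few; the sketch below is a reconstruction showing only keys visible in the dump, in containerd 1.6 TOML form (anything not shown is left at its default):

    # /etc/containerd/config.toml (excerpt, reconstructed from the dump)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false   # cgroupfs driver; matches the v1 taint above
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is expected at this point: /etc/cni/net.d is empty until a network plugin installs its config, and the conf syncer started above picks it up once one appears.
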
May 8 00:44:30.360277 systemd[1]: Finished systemd-user-sessions.service. May 8 00:44:30.362269 systemd[1]: Started getty@tty1.service. May 8 00:44:30.364014 systemd[1]: Started serial-getty@ttyAMA0.service. May 8 00:44:30.364846 systemd[1]: Reached target getty.target. May 8 00:44:30.365472 systemd[1]: Reached target multi-user.target. May 8 00:44:30.367396 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:44:30.373369 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:44:30.373571 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:44:30.374455 systemd[1]: Startup finished in 5.295s (kernel) + 6.092s (userspace) = 11.388s. May 8 00:44:31.772471 systemd[1]: Created slice system-sshd.slice. May 8 00:44:31.773780 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:59626.service. May 8 00:44:31.822561 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 59626 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:44:31.824474 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:31.833589 systemd-logind[1304]: New session 1 of user core. May 8 00:44:31.834407 systemd[1]: Created slice user-500.slice. May 8 00:44:31.835465 systemd[1]: Starting user-runtime-dir@500.service... May 8 00:44:31.844662 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:44:31.846153 systemd[1]: Starting user@500.service... May 8 00:44:31.849251 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:31.909114 systemd[1409]: Queued start job for default target default.target. May 8 00:44:31.909356 systemd[1409]: Reached target paths.target. May 8 00:44:31.909371 systemd[1409]: Reached target sockets.target. May 8 00:44:31.909382 systemd[1409]: Reached target timers.target. May 8 00:44:31.909403 systemd[1409]: Reached target basic.target. May 8 00:44:31.909447 systemd[1409]: Reached target default.target. May 8 00:44:31.909473 systemd[1409]: Startup finished in 54ms. May 8 00:44:31.909558 systemd[1]: Started user@500.service. May 8 00:44:31.910539 systemd[1]: Started session-1.scope. May 8 00:44:31.960879 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:59640.service. May 8 00:44:32.002559 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 59640 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:44:32.004294 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:32.008002 systemd-logind[1304]: New session 2 of user core. May 8 00:44:32.008794 systemd[1]: Started session-2.scope. May 8 00:44:32.062964 sshd[1418]: pam_unix(sshd:session): session closed for user core May 8 00:44:32.065424 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:59652.service. May 8 00:44:32.066858 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:59640.service: Deactivated successfully. May 8 00:44:32.067913 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:44:32.068246 systemd-logind[1304]: Session 2 logged out. Waiting for processes to exit. May 8 00:44:32.068993 systemd-logind[1304]: Removed session 2. May 8 00:44:32.103183 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 59652 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:44:32.104786 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:32.108706 systemd-logind[1304]: New session 3 of user core. 
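
[Editor's note] Each "Accepted publickey" line above identifies the client key by its SHA256 fingerprint (SHA256:bNzqUoNI...). OpenSSH computes that as an unpadded base64 SHA-256 of the raw key blob, so it can be matched against an authorized_keys file offline. A sketch; the example entry is hypothetical, since the log does not include key material:

    # key_fingerprint.py -- recompute sshd's "SHA256:..." key fingerprint.
    import base64, hashlib

    def fingerprint(authorized_keys_line: str) -> str:
        # Field 2 of an authorized_keys entry is the base64 key blob that
        # sshd hashes for its log line.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # fingerprint("ssh-ed25519 AAAAC3Nza... core@host")  # hypothetical entry
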
May 8 00:44:32.109003 systemd[1]: Started session-3.scope. May 8 00:44:32.159826 sshd[1423]: pam_unix(sshd:session): session closed for user core May 8 00:44:32.162045 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:59660.service. May 8 00:44:32.162564 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:59652.service: Deactivated successfully. May 8 00:44:32.163535 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:44:32.163572 systemd-logind[1304]: Session 3 logged out. Waiting for processes to exit. May 8 00:44:32.164706 systemd-logind[1304]: Removed session 3. May 8 00:44:32.199293 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:44:32.200450 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:32.203520 systemd-logind[1304]: New session 4 of user core. May 8 00:44:32.204303 systemd[1]: Started session-4.scope. May 8 00:44:32.258808 sshd[1430]: pam_unix(sshd:session): session closed for user core May 8 00:44:32.261218 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:59676.service. May 8 00:44:32.262481 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:59660.service: Deactivated successfully. May 8 00:44:32.263408 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:44:32.263433 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. May 8 00:44:32.264446 systemd-logind[1304]: Removed session 4. May 8 00:44:32.298600 sshd[1437]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:44:32.299906 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:44:32.303773 systemd-logind[1304]: New session 5 of user core. May 8 00:44:32.304170 systemd[1]: Started session-5.scope. May 8 00:44:32.364819 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:44:32.365705 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:44:32.425771 systemd[1]: Starting docker.service... 
May 8 00:44:32.514422 env[1455]: time="2025-05-08T00:44:32.514372327Z" level=info msg="Starting up" May 8 00:44:32.516257 env[1455]: time="2025-05-08T00:44:32.516223600Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:44:32.516257 env[1455]: time="2025-05-08T00:44:32.516248362Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:44:32.516370 env[1455]: time="2025-05-08T00:44:32.516270869Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 8 00:44:32.516370 env[1455]: time="2025-05-08T00:44:32.516281982Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 8 00:44:32.518368 env[1455]: time="2025-05-08T00:44:32.518341376Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:44:32.518368 env[1455]: time="2025-05-08T00:44:32.518365333Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:44:32.518455 env[1455]: time="2025-05-08T00:44:32.518382123Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 8 00:44:32.518455 env[1455]: time="2025-05-08T00:44:32.518392390Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 8 00:44:32.523264 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3425695988-merged.mount: Deactivated successfully. May 8 00:44:32.695193 env[1455]: time="2025-05-08T00:44:32.695097635Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 8 00:44:32.695193 env[1455]: time="2025-05-08T00:44:32.695131094Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 8 00:44:32.695789 env[1455]: time="2025-05-08T00:44:32.695281035Z" level=info msg="Loading containers: start." May 8 00:44:32.816677 kernel: Initializing XFRM netlink socket May 8 00:44:32.841283 env[1455]: time="2025-05-08T00:44:32.841243453Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 8 00:44:32.904163 systemd-networkd[1097]: docker0: Link UP May 8 00:44:32.919804 env[1455]: time="2025-05-08T00:44:32.919761268Z" level=info msg="Loading containers: done." May 8 00:44:32.941190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1342896099-merged.mount: Deactivated successfully. May 8 00:44:32.946409 env[1455]: time="2025-05-08T00:44:32.946319516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:44:32.947014 env[1455]: time="2025-05-08T00:44:32.946986882Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 8 00:44:32.947207 env[1455]: time="2025-05-08T00:44:32.947190736Z" level=info msg="Daemon has completed initialization" May 8 00:44:32.960148 systemd[1]: Started docker.service. May 8 00:44:32.968129 env[1455]: time="2025-05-08T00:44:32.968074537Z" level=info msg="API listen on /run/docker.sock" May 8 00:44:33.634696 env[1315]: time="2025-05-08T00:44:33.634637085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 00:44:34.194930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4094742161.mount: Deactivated successfully. 
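
[Editor's note] dockerd notes above that docker0 came up on the default 172.17.0.0/16 and points at --bip for overriding it. The daemon.json equivalent is the "bip" key; a sketch, where the subnet is an illustrative example rather than anything taken from this host:

    {
      "bip": "192.168.200.1/24"
    }

Dropping that into /etc/docker/daemon.json and restarting docker.service moves the bridge, which matters when 172.17.0.0/16 collides with an existing network.
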
May 8 00:44:35.549179 env[1315]: time="2025-05-08T00:44:35.549121371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:35.550335 env[1315]: time="2025-05-08T00:44:35.550307381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:35.552510 env[1315]: time="2025-05-08T00:44:35.552466947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:35.555325 env[1315]: time="2025-05-08T00:44:35.555288060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:35.556008 env[1315]: time="2025-05-08T00:44:35.555972386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 8 00:44:35.564380 env[1315]: time="2025-05-08T00:44:35.564339942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 00:44:37.130344 env[1315]: time="2025-05-08T00:44:37.130293264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:37.131481 env[1315]: time="2025-05-08T00:44:37.131457302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:37.133426 env[1315]: time="2025-05-08T00:44:37.133391896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:37.135895 env[1315]: time="2025-05-08T00:44:37.135862935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:37.136568 env[1315]: time="2025-05-08T00:44:37.136529740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 8 00:44:37.146597 env[1315]: time="2025-05-08T00:44:37.146563038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 00:44:38.271112 env[1315]: time="2025-05-08T00:44:38.271057088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:38.273316 env[1315]: time="2025-05-08T00:44:38.273287461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:38.275416 env[1315]: 
time="2025-05-08T00:44:38.275393869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:38.277098 env[1315]: time="2025-05-08T00:44:38.277067963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:38.277970 env[1315]: time="2025-05-08T00:44:38.277944065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 8 00:44:38.288184 env[1315]: time="2025-05-08T00:44:38.288150263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 00:44:39.413568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666442219.mount: Deactivated successfully. May 8 00:44:39.414512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 00:44:39.414633 systemd[1]: Stopped kubelet.service. May 8 00:44:39.416058 systemd[1]: Starting kubelet.service... May 8 00:44:39.502060 systemd[1]: Started kubelet.service. May 8 00:44:39.552498 kubelet[1617]: E0508 00:44:39.552447 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:44:39.555021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:44:39.555163 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:44:39.916574 env[1315]: time="2025-05-08T00:44:39.914024543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:39.916574 env[1315]: time="2025-05-08T00:44:39.915081429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:39.917170 env[1315]: time="2025-05-08T00:44:39.917142065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:39.918382 env[1315]: time="2025-05-08T00:44:39.918354673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:39.919358 env[1315]: time="2025-05-08T00:44:39.919329347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 8 00:44:39.928792 env[1315]: time="2025-05-08T00:44:39.928755062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:44:40.492576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202060072.mount: Deactivated successfully. 
May 8 00:44:41.415446 env[1315]: time="2025-05-08T00:44:41.415398097Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.417299 env[1315]: time="2025-05-08T00:44:41.417267271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.419207 env[1315]: time="2025-05-08T00:44:41.419177086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.420925 env[1315]: time="2025-05-08T00:44:41.420900772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.421817 env[1315]: time="2025-05-08T00:44:41.421788967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:44:41.430894 env[1315]: time="2025-05-08T00:44:41.430845825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 00:44:41.855884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872194991.mount: Deactivated successfully. May 8 00:44:41.859092 env[1315]: time="2025-05-08T00:44:41.859054058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.860701 env[1315]: time="2025-05-08T00:44:41.860664240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.862635 env[1315]: time="2025-05-08T00:44:41.862602631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.864511 env[1315]: time="2025-05-08T00:44:41.864477897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:41.865053 env[1315]: time="2025-05-08T00:44:41.865015439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 8 00:44:41.873935 env[1315]: time="2025-05-08T00:44:41.873889456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 00:44:42.363720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112291580.mount: Deactivated successfully. 
May 8 00:44:44.457478 env[1315]: time="2025-05-08T00:44:44.457423692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:44.459054 env[1315]: time="2025-05-08T00:44:44.459016442Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:44.461211 env[1315]: time="2025-05-08T00:44:44.461175422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:44.463023 env[1315]: time="2025-05-08T00:44:44.462997156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:44.464644 env[1315]: time="2025-05-08T00:44:44.464610253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 8 00:44:49.806004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:44:49.806181 systemd[1]: Stopped kubelet.service. May 8 00:44:49.807696 systemd[1]: Starting kubelet.service... May 8 00:44:49.909201 systemd[1]: Started kubelet.service. May 8 00:44:49.955870 kubelet[1730]: E0508 00:44:49.955827 1730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:44:49.957552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:44:49.957758 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:44:50.571176 systemd[1]: Stopped kubelet.service. May 8 00:44:50.573204 systemd[1]: Starting kubelet.service... May 8 00:44:50.590007 systemd[1]: Reloading. May 8 00:44:50.636857 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-05-08T00:44:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:44:50.636888 /usr/lib/systemd/system-generators/torcx-generator[1768]: time="2025-05-08T00:44:50Z" level=info msg="torcx already run" May 8 00:44:50.748437 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:44:50.748457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:44:50.764112 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:50.827042 systemd[1]: Started kubelet.service. May 8 00:44:50.829691 systemd[1]: Stopping kubelet.service... 
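
[Editor's note] The kubelet failures at 00:44:29, 00:44:39 and 00:44:49 are all the same condition: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is written by kubeadm init/join, so a crash-looping kubelet before that step is expected rather than fatal; the next start, below, gets past config loading. For reference, a minimal sketch of the file it wants; the values are illustrative defaults chosen to match details logged elsewhere (the cgroupfs driver and /etc/kubernetes/manifests static-pod path in the kubelet output below, the containerd socket in the CRI dump above), not contents recovered from this machine:

    # /var/lib/kubelet/config.yaml (illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

Setting the runtime endpoint here rather than on the command line also silences the --container-runtime-endpoint deprecation warning the kubelet prints once it starts.
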
May 8 00:44:50.830238 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:44:50.830472 systemd[1]: Stopped kubelet.service. May 8 00:44:50.832699 systemd[1]: Starting kubelet.service... May 8 00:44:50.915779 systemd[1]: Started kubelet.service. May 8 00:44:50.954029 kubelet[1825]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:50.954029 kubelet[1825]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:44:50.954029 kubelet[1825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:50.954424 kubelet[1825]: I0508 00:44:50.954177 1825 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:44:51.704328 kubelet[1825]: I0508 00:44:51.704282 1825 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:44:51.704328 kubelet[1825]: I0508 00:44:51.704314 1825 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:44:51.704527 kubelet[1825]: I0508 00:44:51.704512 1825 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:44:51.725744 kubelet[1825]: I0508 00:44:51.725704 1825 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:44:51.728097 kubelet[1825]: E0508 00:44:51.728073 1825 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.734780 kubelet[1825]: I0508 00:44:51.734756 1825 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:44:51.736136 kubelet[1825]: I0508 00:44:51.736087 1825 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:44:51.736303 kubelet[1825]: I0508 00:44:51.736130 1825 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:44:51.736378 kubelet[1825]: I0508 00:44:51.736371 1825 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:44:51.736403 kubelet[1825]: I0508 00:44:51.736381 1825 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:44:51.736643 kubelet[1825]: I0508 00:44:51.736619 1825 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:51.737583 kubelet[1825]: I0508 00:44:51.737566 1825 kubelet.go:400] "Attempting to sync node with API server" May 8 00:44:51.737622 kubelet[1825]: I0508 00:44:51.737587 1825 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:44:51.737808 kubelet[1825]: I0508 00:44:51.737797 1825 kubelet.go:312] "Adding apiserver pod source" May 8 00:44:51.737888 kubelet[1825]: I0508 00:44:51.737873 1825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:44:51.738496 kubelet[1825]: W0508 00:44:51.738454 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.738601 kubelet[1825]: E0508 00:44:51.738588 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.739069 kubelet[1825]: I0508 00:44:51.739052 1825 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" 
version="1.6.16" apiVersion="v1" May 8 00:44:51.739201 kubelet[1825]: W0508 00:44:51.739071 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.739249 kubelet[1825]: E0508 00:44:51.739203 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.739588 kubelet[1825]: I0508 00:44:51.739574 1825 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:44:51.739763 kubelet[1825]: W0508 00:44:51.739752 1825 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:44:51.740602 kubelet[1825]: I0508 00:44:51.740583 1825 server.go:1264] "Started kubelet" May 8 00:44:51.742528 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 8 00:44:51.742982 kubelet[1825]: I0508 00:44:51.742962 1825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:44:51.744189 kubelet[1825]: I0508 00:44:51.744141 1825 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:44:51.745184 kubelet[1825]: I0508 00:44:51.745150 1825 server.go:455] "Adding debug handlers to kubelet server" May 8 00:44:51.746116 kubelet[1825]: I0508 00:44:51.746057 1825 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:44:51.746282 kubelet[1825]: I0508 00:44:51.746261 1825 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:44:51.747934 kubelet[1825]: E0508 00:44:51.747885 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:51.748032 kubelet[1825]: I0508 00:44:51.748016 1825 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:44:51.748493 kubelet[1825]: E0508 00:44:51.747981 1825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66a58ec56483 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:44:51.740558467 +0000 UTC m=+0.820883015,LastTimestamp:2025-05-08 00:44:51.740558467 +0000 UTC m=+0.820883015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:44:51.748493 kubelet[1825]: I0508 00:44:51.748136 1825 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:44:51.748493 kubelet[1825]: I0508 00:44:51.748370 1825 reconciler.go:26] "Reconciler: start to sync state" May 8 00:44:51.748838 kubelet[1825]: W0508 00:44:51.748782 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.748838 kubelet[1825]: E0508 00:44:51.748834 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.749019 kubelet[1825]: E0508 00:44:51.748986 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" May 8 00:44:51.749091 kubelet[1825]: I0508 00:44:51.749055 1825 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:44:51.749596 kubelet[1825]: E0508 00:44:51.749483 1825 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:44:51.750404 kubelet[1825]: I0508 00:44:51.750381 1825 factory.go:221] Registration of the containerd container factory successfully May 8 00:44:51.750404 kubelet[1825]: I0508 00:44:51.750401 1825 factory.go:221] Registration of the systemd container factory successfully May 8 00:44:51.760615 kubelet[1825]: I0508 00:44:51.760581 1825 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:44:51.761545 kubelet[1825]: I0508 00:44:51.761530 1825 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:44:51.761636 kubelet[1825]: I0508 00:44:51.761625 1825 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:44:51.761722 kubelet[1825]: I0508 00:44:51.761711 1825 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:44:51.761980 kubelet[1825]: E0508 00:44:51.761947 1825 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:44:51.764054 kubelet[1825]: W0508 00:44:51.763997 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.764180 kubelet[1825]: E0508 00:44:51.764160 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:51.772409 kubelet[1825]: I0508 00:44:51.772391 1825 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:44:51.772517 kubelet[1825]: I0508 00:44:51.772504 1825 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:44:51.772575 kubelet[1825]: I0508 00:44:51.772566 1825 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:51.849711 kubelet[1825]: I0508 00:44:51.849678 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:51.850052 kubelet[1825]: E0508 00:44:51.850023 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 8 00:44:51.862152 kubelet[1825]: E0508 00:44:51.862126 1825 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 00:44:51.872624 kubelet[1825]: I0508 00:44:51.872605 1825 policy_none.go:49] "None policy: Start" May 8 00:44:51.873486 kubelet[1825]: I0508 00:44:51.873451 1825 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:44:51.873553 kubelet[1825]: I0508 00:44:51.873496 1825 state_mem.go:35] "Initializing new in-memory state store" May 8 00:44:51.878966 kubelet[1825]: I0508 00:44:51.878939 1825 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:44:51.879123 kubelet[1825]: I0508 00:44:51.879085 1825 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:44:51.879203 kubelet[1825]: I0508 00:44:51.879186 1825 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:44:51.880404 kubelet[1825]: E0508 00:44:51.880377 1825 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:44:51.950032 kubelet[1825]: E0508 00:44:51.949995 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" May 8 00:44:52.052705 kubelet[1825]: I0508 00:44:52.051607 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 
8 00:44:52.053396 kubelet[1825]: E0508 00:44:52.052754 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 8 00:44:52.062998 kubelet[1825]: I0508 00:44:52.062953 1825 topology_manager.go:215] "Topology Admit Handler" podUID="2ab686c858bce6ca4e8173b1ab4bdcc5" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:44:52.064095 kubelet[1825]: I0508 00:44:52.064070 1825 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:44:52.064874 kubelet[1825]: I0508 00:44:52.064839 1825 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:44:52.150742 kubelet[1825]: I0508 00:44:52.150695 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:52.150742 kubelet[1825]: I0508 00:44:52.150745 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:52.150908 kubelet[1825]: I0508 00:44:52.150771 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:52.150908 kubelet[1825]: I0508 00:44:52.150787 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:52.150908 kubelet[1825]: I0508 00:44:52.150816 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:52.150908 kubelet[1825]: I0508 00:44:52.150836 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:52.150908 kubelet[1825]: I0508 00:44:52.150853 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:44:52.151019 kubelet[1825]: I0508 00:44:52.150868 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:52.151019 kubelet[1825]: I0508 00:44:52.150923 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:52.351218 kubelet[1825]: E0508 00:44:52.351095 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" May 8 00:44:52.369450 kubelet[1825]: E0508 00:44:52.369415 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:52.370366 env[1315]: time="2025-05-08T00:44:52.370077793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ab686c858bce6ca4e8173b1ab4bdcc5,Namespace:kube-system,Attempt:0,}" May 8 00:44:52.372277 kubelet[1825]: E0508 00:44:52.372245 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:52.372358 kubelet[1825]: E0508 00:44:52.372333 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:52.372644 env[1315]: time="2025-05-08T00:44:52.372612268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 8 00:44:52.373282 env[1315]: time="2025-05-08T00:44:52.373120059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 8 00:44:52.454479 kubelet[1825]: I0508 00:44:52.454453 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:52.454778 kubelet[1825]: E0508 00:44:52.454757 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 8 00:44:52.712178 kubelet[1825]: W0508 00:44:52.712058 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:52.712178 kubelet[1825]: E0508 00:44:52.712106 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:52.829829 kubelet[1825]: W0508 00:44:52.829769 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:52.829829 kubelet[1825]: E0508 00:44:52.829828 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.90:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:52.935832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139507518.mount: Deactivated successfully. May 8 00:44:52.939497 env[1315]: time="2025-05-08T00:44:52.939438754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.941386 env[1315]: time="2025-05-08T00:44:52.941340981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.942930 env[1315]: time="2025-05-08T00:44:52.942903613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.943826 env[1315]: time="2025-05-08T00:44:52.943794859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.945313 env[1315]: time="2025-05-08T00:44:52.945286098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.946708 env[1315]: time="2025-05-08T00:44:52.946677892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.949855 env[1315]: time="2025-05-08T00:44:52.949824286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.951219 env[1315]: time="2025-05-08T00:44:52.951186026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.951951 env[1315]: time="2025-05-08T00:44:52.951918600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.953302 env[1315]: time="2025-05-08T00:44:52.953268855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.953953 env[1315]: 
time="2025-05-08T00:44:52.953932838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.957911 env[1315]: time="2025-05-08T00:44:52.957039453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:44:52.973363 env[1315]: time="2025-05-08T00:44:52.971714539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:52.973363 env[1315]: time="2025-05-08T00:44:52.971748195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:52.973363 env[1315]: time="2025-05-08T00:44:52.971758359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:52.973363 env[1315]: time="2025-05-08T00:44:52.971932038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f6c1dc1da9eb5258c694fbba1083c60cacca647866390490de9d40fd90e39f5 pid=1875 runtime=io.containerd.runc.v2 May 8 00:44:52.974335 env[1315]: time="2025-05-08T00:44:52.974276586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:52.974429 env[1315]: time="2025-05-08T00:44:52.974311162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:52.974429 env[1315]: time="2025-05-08T00:44:52.974320887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:52.975043 env[1315]: time="2025-05-08T00:44:52.974644194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5fadfc8594c57d6274e7b75a3a32e2c93f190f0495e01bef685dc57513d9cfd pid=1876 runtime=io.containerd.runc.v2 May 8 00:44:52.982144 env[1315]: time="2025-05-08T00:44:52.982033320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:44:52.982231 env[1315]: time="2025-05-08T00:44:52.982182468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:44:52.982231 env[1315]: time="2025-05-08T00:44:52.982209561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:44:52.982506 env[1315]: time="2025-05-08T00:44:52.982443467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37f890a8999cc7a75c275c9f5c6ef78a8be9baf4a18104d4a599cf72ebab5756 pid=1908 runtime=io.containerd.runc.v2 May 8 00:44:53.028941 kubelet[1825]: W0508 00:44:53.028851 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:53.028941 kubelet[1825]: E0508 00:44:53.028915 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:53.051950 env[1315]: time="2025-05-08T00:44:53.051906237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6c1dc1da9eb5258c694fbba1083c60cacca647866390490de9d40fd90e39f5\"" May 8 00:44:53.052597 env[1315]: time="2025-05-08T00:44:53.052566180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ab686c858bce6ca4e8173b1ab4bdcc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5fadfc8594c57d6274e7b75a3a32e2c93f190f0495e01bef685dc57513d9cfd\"" May 8 00:44:53.053679 kubelet[1825]: E0508 00:44:53.053372 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.054736 kubelet[1825]: E0508 00:44:53.054715 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.059525 env[1315]: time="2025-05-08T00:44:53.059489300Z" level=info msg="CreateContainer within sandbox \"8f6c1dc1da9eb5258c694fbba1083c60cacca647866390490de9d40fd90e39f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:44:53.060143 env[1315]: time="2025-05-08T00:44:53.060117230Z" level=info msg="CreateContainer within sandbox \"e5fadfc8594c57d6274e7b75a3a32e2c93f190f0495e01bef685dc57513d9cfd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:44:53.061553 env[1315]: time="2025-05-08T00:44:53.061510706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"37f890a8999cc7a75c275c9f5c6ef78a8be9baf4a18104d4a599cf72ebab5756\"" May 8 00:44:53.062497 kubelet[1825]: E0508 00:44:53.062325 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.064031 env[1315]: time="2025-05-08T00:44:53.063997617Z" level=info msg="CreateContainer within sandbox \"37f890a8999cc7a75c275c9f5c6ef78a8be9baf4a18104d4a599cf72ebab5756\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:44:53.078279 env[1315]: time="2025-05-08T00:44:53.078232813Z" level=info msg="CreateContainer within sandbox 
\"8f6c1dc1da9eb5258c694fbba1083c60cacca647866390490de9d40fd90e39f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3902f27c12b49accc3f7a6b6b92d5688fad4df633b98b2e565710efdba4ea55b\"" May 8 00:44:53.078841 env[1315]: time="2025-05-08T00:44:53.078805881Z" level=info msg="StartContainer for \"3902f27c12b49accc3f7a6b6b92d5688fad4df633b98b2e565710efdba4ea55b\"" May 8 00:44:53.083199 env[1315]: time="2025-05-08T00:44:53.083086508Z" level=info msg="CreateContainer within sandbox \"37f890a8999cc7a75c275c9f5c6ef78a8be9baf4a18104d4a599cf72ebab5756\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e42edeea54e8d9688d623e1f5cb6f024ee72e9535f9e06d5ea64bce9325eb75\"" May 8 00:44:53.083632 env[1315]: time="2025-05-08T00:44:53.083598912Z" level=info msg="StartContainer for \"6e42edeea54e8d9688d623e1f5cb6f024ee72e9535f9e06d5ea64bce9325eb75\"" May 8 00:44:53.084619 env[1315]: time="2025-05-08T00:44:53.084580343Z" level=info msg="CreateContainer within sandbox \"e5fadfc8594c57d6274e7b75a3a32e2c93f190f0495e01bef685dc57513d9cfd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7b1a6fb405ea6a49c905bcf67ef46b085aadf6587d7f5721b09fd10d554cb894\"" May 8 00:44:53.085022 env[1315]: time="2025-05-08T00:44:53.084959494Z" level=info msg="StartContainer for \"7b1a6fb405ea6a49c905bcf67ef46b085aadf6587d7f5721b09fd10d554cb894\"" May 8 00:44:53.151901 kubelet[1825]: E0508 00:44:53.151829 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" May 8 00:44:53.163661 env[1315]: time="2025-05-08T00:44:53.163594684Z" level=info msg="StartContainer for \"6e42edeea54e8d9688d623e1f5cb6f024ee72e9535f9e06d5ea64bce9325eb75\" returns successfully" May 8 00:44:53.181845 env[1315]: time="2025-05-08T00:44:53.181789018Z" level=info msg="StartContainer for \"7b1a6fb405ea6a49c905bcf67ef46b085aadf6587d7f5721b09fd10d554cb894\" returns successfully" May 8 00:44:53.205508 env[1315]: time="2025-05-08T00:44:53.205466337Z" level=info msg="StartContainer for \"3902f27c12b49accc3f7a6b6b92d5688fad4df633b98b2e565710efdba4ea55b\" returns successfully" May 8 00:44:53.237209 kubelet[1825]: W0508 00:44:53.236290 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:53.237209 kubelet[1825]: E0508 00:44:53.236361 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.90:6443: connect: connection refused May 8 00:44:53.257044 kubelet[1825]: I0508 00:44:53.256729 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:53.257044 kubelet[1825]: E0508 00:44:53.257010 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" May 8 00:44:53.769337 kubelet[1825]: E0508 00:44:53.769315 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.771336 kubelet[1825]: E0508 00:44:53.771312 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:53.772502 kubelet[1825]: E0508 00:44:53.772481 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:54.774665 kubelet[1825]: E0508 00:44:54.774622 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:54.858529 kubelet[1825]: I0508 00:44:54.858488 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:55.158476 kubelet[1825]: E0508 00:44:55.158369 1825 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:44:55.306596 kubelet[1825]: I0508 00:44:55.306565 1825 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:44:55.315349 kubelet[1825]: E0508 00:44:55.315320 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.415842 kubelet[1825]: E0508 00:44:55.415734 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.516491 kubelet[1825]: E0508 00:44:55.516462 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.616981 kubelet[1825]: E0508 00:44:55.616949 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.717519 kubelet[1825]: E0508 00:44:55.717425 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.818677 kubelet[1825]: E0508 00:44:55.818634 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:55.919415 kubelet[1825]: E0508 00:44:55.919369 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:56.020493 kubelet[1825]: E0508 00:44:56.020399 1825 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:44:56.742501 kubelet[1825]: I0508 00:44:56.742466 1825 apiserver.go:52] "Watching apiserver" May 8 00:44:56.748687 kubelet[1825]: I0508 00:44:56.748657 1825 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:44:57.397287 systemd[1]: Reloading. May 8 00:44:57.443420 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2025-05-08T00:44:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:44:57.443793 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2025-05-08T00:44:57Z" level=info msg="torcx already run" May 8 00:44:57.502608 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. May 8 00:44:57.502627 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:44:57.518118 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:44:57.587302 kubelet[1825]: I0508 00:44:57.587267 1825 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:44:57.587421 systemd[1]: Stopping kubelet.service... May 8 00:44:57.611084 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:44:57.611383 systemd[1]: Stopped kubelet.service. May 8 00:44:57.613012 systemd[1]: Starting kubelet.service... May 8 00:44:57.717715 systemd[1]: Started kubelet.service. May 8 00:44:57.760102 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:57.760102 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:44:57.760102 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:44:57.760480 kubelet[2176]: I0508 00:44:57.760150 2176 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:44:57.764411 kubelet[2176]: I0508 00:44:57.764382 2176 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 00:44:57.764411 kubelet[2176]: I0508 00:44:57.764405 2176 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:44:57.764576 kubelet[2176]: I0508 00:44:57.764552 2176 server.go:927] "Client rotation is on, will bootstrap in background" May 8 00:44:57.765853 kubelet[2176]: I0508 00:44:57.765831 2176 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:44:57.767771 kubelet[2176]: I0508 00:44:57.767744 2176 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:44:57.773446 kubelet[2176]: I0508 00:44:57.773425 2176 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:44:57.773917 kubelet[2176]: I0508 00:44:57.773892 2176 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:44:57.774066 kubelet[2176]: I0508 00:44:57.773921 2176 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 00:44:57.774145 kubelet[2176]: I0508 00:44:57.774072 2176 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:44:57.774145 kubelet[2176]: I0508 00:44:57.774080 2176 container_manager_linux.go:301] "Creating device plugin manager" May 8 00:44:57.774145 kubelet[2176]: I0508 00:44:57.774109 2176 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:57.774213 kubelet[2176]: I0508 00:44:57.774201 2176 kubelet.go:400] "Attempting to sync node with API server" May 8 00:44:57.774237 kubelet[2176]: I0508 00:44:57.774213 2176 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:44:57.774261 kubelet[2176]: I0508 00:44:57.774238 2176 kubelet.go:312] "Adding apiserver pod source" May 8 00:44:57.774261 kubelet[2176]: I0508 00:44:57.774252 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:44:57.779323 kubelet[2176]: I0508 00:44:57.775020 2176 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:44:57.779323 kubelet[2176]: I0508 00:44:57.775163 2176 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:44:57.779323 kubelet[2176]: I0508 00:44:57.775482 2176 server.go:1264] "Started kubelet" May 8 00:44:57.779323 kubelet[2176]: I0508 00:44:57.777479 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:44:57.784578 kubelet[2176]: I0508 00:44:57.781968 2176 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:44:57.784578 kubelet[2176]: I0508 00:44:57.783365 2176 server.go:455] "Adding debug handlers to kubelet server" 
May 8 00:44:57.784578 kubelet[2176]: I0508 00:44:57.784161 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:44:57.784578 kubelet[2176]: I0508 00:44:57.784392 2176 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:44:57.786024 kubelet[2176]: I0508 00:44:57.785998 2176 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 00:44:57.790488 kubelet[2176]: I0508 00:44:57.790458 2176 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:44:57.790824 kubelet[2176]: I0508 00:44:57.790798 2176 reconciler.go:26] "Reconciler: start to sync state" May 8 00:44:57.795746 kubelet[2176]: I0508 00:44:57.795713 2176 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:44:57.795954 kubelet[2176]: I0508 00:44:57.795893 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:44:57.800184 kubelet[2176]: I0508 00:44:57.800165 2176 factory.go:221] Registration of the containerd container factory successfully May 8 00:44:57.800262 kubelet[2176]: I0508 00:44:57.800252 2176 factory.go:221] Registration of the systemd container factory successfully May 8 00:44:57.807416 kubelet[2176]: E0508 00:44:57.805761 2176 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:44:57.813106 kubelet[2176]: I0508 00:44:57.813064 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:44:57.813106 kubelet[2176]: I0508 00:44:57.813102 2176 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:44:57.813229 kubelet[2176]: I0508 00:44:57.813118 2176 kubelet.go:2337] "Starting kubelet main sync loop" May 8 00:44:57.813229 kubelet[2176]: E0508 00:44:57.813180 2176 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:44:57.845681 kubelet[2176]: I0508 00:44:57.845621 2176 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:44:57.845681 kubelet[2176]: I0508 00:44:57.845669 2176 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:44:57.845681 kubelet[2176]: I0508 00:44:57.845691 2176 state_mem.go:36] "Initialized new in-memory state store" May 8 00:44:57.845854 kubelet[2176]: I0508 00:44:57.845837 2176 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:44:57.845881 kubelet[2176]: I0508 00:44:57.845846 2176 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:44:57.845881 kubelet[2176]: I0508 00:44:57.845863 2176 policy_none.go:49] "None policy: Start" May 8 00:44:57.846470 kubelet[2176]: I0508 00:44:57.846454 2176 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:44:57.846531 kubelet[2176]: I0508 00:44:57.846477 2176 state_mem.go:35] "Initializing new in-memory state store" May 8 00:44:57.846638 kubelet[2176]: I0508 00:44:57.846623 2176 state_mem.go:75] "Updated machine memory state" May 8 00:44:57.847879 kubelet[2176]: I0508 00:44:57.847852 2176 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:44:57.848056 kubelet[2176]: I0508 00:44:57.848012 2176 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:44:57.848137 kubelet[2176]: I0508 00:44:57.848119 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:44:57.889472 kubelet[2176]: I0508 00:44:57.889431 2176 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 8 00:44:57.896374 kubelet[2176]: I0508 00:44:57.896340 2176 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 8 00:44:57.897170 kubelet[2176]: I0508 00:44:57.896422 2176 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 8 00:44:57.913418 kubelet[2176]: I0508 00:44:57.913384 2176 topology_manager.go:215] "Topology Admit Handler" podUID="2ab686c858bce6ca4e8173b1ab4bdcc5" podNamespace="kube-system" podName="kube-apiserver-localhost" May 8 00:44:57.913530 kubelet[2176]: I0508 00:44:57.913499 2176 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 8 00:44:57.913562 kubelet[2176]: I0508 00:44:57.913537 2176 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 8 00:44:57.992133 kubelet[2176]: I0508 00:44:57.992031 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:57.992300 kubelet[2176]: I0508 00:44:57.992279 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:57.992410 kubelet[2176]: I0508 00:44:57.992391 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:57.992491 kubelet[2176]: I0508 00:44:57.992478 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 8 00:44:57.992557 kubelet[2176]: I0508 00:44:57.992545 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:57.992623 kubelet[2176]: I0508 00:44:57.992611 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2ab686c858bce6ca4e8173b1ab4bdcc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ab686c858bce6ca4e8173b1ab4bdcc5\") " pod="kube-system/kube-apiserver-localhost" May 8 00:44:57.992712 kubelet[2176]: I0508 00:44:57.992698 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:57.992805 kubelet[2176]: I0508 00:44:57.992791 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:57.992885 kubelet[2176]: I0508 00:44:57.992873 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:44:58.251347 kubelet[2176]: E0508 00:44:58.251241 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.251532 kubelet[2176]: E0508 00:44:58.251500 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.251920 kubelet[2176]: E0508 00:44:58.251901 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.403685 sudo[2209]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:44:58.403917 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 8 00:44:58.775224 kubelet[2176]: I0508 00:44:58.775192 2176 apiserver.go:52] "Watching apiserver" May 8 00:44:58.790893 kubelet[2176]: I0508 00:44:58.790860 2176 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:44:58.823530 kubelet[2176]: E0508 00:44:58.823491 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.837438 kubelet[2176]: E0508 00:44:58.837401 2176 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:44:58.837900 kubelet[2176]: E0508 00:44:58.837873 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.838679 kubelet[2176]: E0508 00:44:58.838307 2176 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 
00:44:58.842037 kubelet[2176]: E0508 00:44:58.842012 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:58.843733 kubelet[2176]: I0508 00:44:58.843192 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.843157698 podStartE2EDuration="1.843157698s" podCreationTimestamp="2025-05-08 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:58.842984583 +0000 UTC m=+1.116479007" watchObservedRunningTime="2025-05-08 00:44:58.843157698 +0000 UTC m=+1.116652122" May 8 00:44:58.857477 kubelet[2176]: I0508 00:44:58.857426 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.857410893 podStartE2EDuration="1.857410893s" podCreationTimestamp="2025-05-08 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:58.849113636 +0000 UTC m=+1.122608060" watchObservedRunningTime="2025-05-08 00:44:58.857410893 +0000 UTC m=+1.130905277" May 8 00:44:58.879136 sudo[2209]: pam_unix(sudo:session): session closed for user root May 8 00:44:59.825551 kubelet[2176]: E0508 00:44:59.825511 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:44:59.826051 kubelet[2176]: E0508 00:44:59.826033 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:00.826832 kubelet[2176]: E0508 00:45:00.826797 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.227611 kubelet[2176]: E0508 00:45:01.227485 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:01.335034 sudo[1443]: pam_unix(sudo:session): session closed for user root May 8 00:45:01.338339 sshd[1437]: pam_unix(sshd:session): session closed for user core May 8 00:45:01.341049 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. May 8 00:45:01.341254 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:59676.service: Deactivated successfully. May 8 00:45:01.342102 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:45:01.342490 systemd-logind[1304]: Removed session 5. 
May 8 00:45:03.681282 kubelet[2176]: E0508 00:45:03.680973 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:03.709443 kubelet[2176]: I0508 00:45:03.709384 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.709366953 podStartE2EDuration="6.709366953s" podCreationTimestamp="2025-05-08 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:44:58.85778629 +0000 UTC m=+1.131280714" watchObservedRunningTime="2025-05-08 00:45:03.709366953 +0000 UTC m=+5.982861377" May 8 00:45:03.831860 kubelet[2176]: E0508 00:45:03.831832 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:08.713983 kubelet[2176]: E0508 00:45:08.713949 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:08.837102 kubelet[2176]: E0508 00:45:08.837063 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:11.236875 kubelet[2176]: E0508 00:45:11.236839 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:45:11.605182 kubelet[2176]: I0508 00:45:11.605163 2176 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:45:11.605787 env[1315]: time="2025-05-08T00:45:11.605688214Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:45:11.606091 kubelet[2176]: I0508 00:45:11.605867 2176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:45:12.384452 kubelet[2176]: I0508 00:45:12.384414 2176 topology_manager.go:215] "Topology Admit Handler" podUID="4256991c-87e6-4bbb-a25f-c1e9ad19d1d8" podNamespace="kube-system" podName="kube-proxy-8lwds" May 8 00:45:12.393782 kubelet[2176]: I0508 00:45:12.393751 2176 topology_manager.go:215] "Topology Admit Handler" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" podNamespace="kube-system" podName="cilium-vzm25" May 8 00:45:12.494241 kubelet[2176]: I0508 00:45:12.494199 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4256991c-87e6-4bbb-a25f-c1e9ad19d1d8-lib-modules\") pod \"kube-proxy-8lwds\" (UID: \"4256991c-87e6-4bbb-a25f-c1e9ad19d1d8\") " pod="kube-system/kube-proxy-8lwds" May 8 00:45:12.494438 kubelet[2176]: I0508 00:45:12.494421 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hubble-tls\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 00:45:12.494520 kubelet[2176]: I0508 00:45:12.494508 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hostproc\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 00:45:12.494607 kubelet[2176]: I0508 00:45:12.494594 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cni-path\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 00:45:12.494716 kubelet[2176]: I0508 00:45:12.494703 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4256991c-87e6-4bbb-a25f-c1e9ad19d1d8-xtables-lock\") pod \"kube-proxy-8lwds\" (UID: \"4256991c-87e6-4bbb-a25f-c1e9ad19d1d8\") " pod="kube-system/kube-proxy-8lwds" May 8 00:45:12.494795 kubelet[2176]: I0508 00:45:12.494783 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-lib-modules\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 00:45:12.494894 kubelet[2176]: I0508 00:45:12.494879 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-xtables-lock\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 00:45:12.494990 kubelet[2176]: I0508 00:45:12.494977 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-config-path\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25" May 8 
May 8 00:45:12.495063 kubelet[2176]: I0508 00:45:12.495051 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-clustermesh-secrets\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495139 kubelet[2176]: I0508 00:45:12.495126 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdbf\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-kube-api-access-ksdbf\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495216 kubelet[2176]: I0508 00:45:12.495203 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-run\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495296 kubelet[2176]: I0508 00:45:12.495281 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-net\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495375 kubelet[2176]: I0508 00:45:12.495362 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-cgroup\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495456 kubelet[2176]: I0508 00:45:12.495443 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-kernel\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495539 kubelet[2176]: I0508 00:45:12.495526 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxx2g\" (UniqueName: \"kubernetes.io/projected/4256991c-87e6-4bbb-a25f-c1e9ad19d1d8-kube-api-access-lxx2g\") pod \"kube-proxy-8lwds\" (UID: \"4256991c-87e6-4bbb-a25f-c1e9ad19d1d8\") " pod="kube-system/kube-proxy-8lwds"
May 8 00:45:12.495620 kubelet[2176]: I0508 00:45:12.495608 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-etc-cni-netd\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.495723 kubelet[2176]: I0508 00:45:12.495706 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4256991c-87e6-4bbb-a25f-c1e9ad19d1d8-kube-proxy\") pod \"kube-proxy-8lwds\" (UID: \"4256991c-87e6-4bbb-a25f-c1e9ad19d1d8\") " pod="kube-system/kube-proxy-8lwds"
May 8 00:45:12.495797 kubelet[2176]: I0508 00:45:12.495784 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-bpf-maps\") pod \"cilium-vzm25\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " pod="kube-system/cilium-vzm25"
May 8 00:45:12.656416 kubelet[2176]: I0508 00:45:12.656300 2176 topology_manager.go:215] "Topology Admit Handler" podUID="e264a0c7-8b56-4e83-9ffc-f4c0def1decb" podNamespace="kube-system" podName="cilium-operator-599987898-jrpn2"
May 8 00:45:12.687821 kubelet[2176]: E0508 00:45:12.687785 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:12.688383 env[1315]: time="2025-05-08T00:45:12.688339080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lwds,Uid:4256991c-87e6-4bbb-a25f-c1e9ad19d1d8,Namespace:kube-system,Attempt:0,}"
May 8 00:45:12.700731 kubelet[2176]: I0508 00:45:12.700694 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8mf\" (UniqueName: \"kubernetes.io/projected/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-kube-api-access-nj8mf\") pod \"cilium-operator-599987898-jrpn2\" (UID: \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\") " pod="kube-system/cilium-operator-599987898-jrpn2"
May 8 00:45:12.700731 kubelet[2176]: E0508 00:45:12.700728 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:12.700856 kubelet[2176]: I0508 00:45:12.700749 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-cilium-config-path\") pod \"cilium-operator-599987898-jrpn2\" (UID: \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\") " pod="kube-system/cilium-operator-599987898-jrpn2"
May 8 00:45:12.701173 env[1315]: time="2025-05-08T00:45:12.701126765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzm25,Uid:02ed2ff0-d90e-44c9-beef-cc7bfd771bed,Namespace:kube-system,Attempt:0,}"
May 8 00:45:12.707838 env[1315]: time="2025-05-08T00:45:12.707774964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:12.707838 env[1315]: time="2025-05-08T00:45:12.707812407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:12.708214 env[1315]: time="2025-05-08T00:45:12.707823208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:12.708214 env[1315]: time="2025-05-08T00:45:12.708114667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cd69c7e2b26510ae2a82ff325a9c7ae46ce2d50efdc5ff900be093f96dcb47d pid=2267 runtime=io.containerd.runc.v2
May 8 00:45:12.725013 env[1315]: time="2025-05-08T00:45:12.724947580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:12.725013 env[1315]: time="2025-05-08T00:45:12.724988102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:12.725166 env[1315]: time="2025-05-08T00:45:12.725024585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:12.725312 env[1315]: time="2025-05-08T00:45:12.725271361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862 pid=2292 runtime=io.containerd.runc.v2
May 8 00:45:12.770758 env[1315]: time="2025-05-08T00:45:12.769716859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lwds,Uid:4256991c-87e6-4bbb-a25f-c1e9ad19d1d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd69c7e2b26510ae2a82ff325a9c7ae46ce2d50efdc5ff900be093f96dcb47d\""
May 8 00:45:12.770888 kubelet[2176]: E0508 00:45:12.770311 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:12.772242 env[1315]: time="2025-05-08T00:45:12.771933846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzm25,Uid:02ed2ff0-d90e-44c9-beef-cc7bfd771bed,Namespace:kube-system,Attempt:0,} returns sandbox id \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\""
May 8 00:45:12.773260 kubelet[2176]: E0508 00:45:12.773240 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:12.773445 env[1315]: time="2025-05-08T00:45:12.773377301Z" level=info msg="CreateContainer within sandbox \"0cd69c7e2b26510ae2a82ff325a9c7ae46ce2d50efdc5ff900be093f96dcb47d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 8 00:45:12.774770 env[1315]: time="2025-05-08T00:45:12.774742792Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 8 00:45:12.788523 env[1315]: time="2025-05-08T00:45:12.788484860Z" level=info msg="CreateContainer within sandbox \"0cd69c7e2b26510ae2a82ff325a9c7ae46ce2d50efdc5ff900be093f96dcb47d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d6320d412f6ab651a0c1bc5ea60dc793fae332384ed9430e7fa27ba00eda866e\""
May 8 00:45:12.790010 env[1315]: time="2025-05-08T00:45:12.789108981Z" level=info msg="StartContainer for \"d6320d412f6ab651a0c1bc5ea60dc793fae332384ed9430e7fa27ba00eda866e\""
May 8 00:45:12.850746 env[1315]: time="2025-05-08T00:45:12.850687092Z" level=info msg="StartContainer for \"d6320d412f6ab651a0c1bc5ea60dc793fae332384ed9430e7fa27ba00eda866e\" returns successfully"
May 8 00:45:12.962205 kubelet[2176]: E0508 00:45:12.962110 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:12.962619 env[1315]: time="2025-05-08T00:45:12.962584410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jrpn2,Uid:e264a0c7-8b56-4e83-9ffc-f4c0def1decb,Namespace:kube-system,Attempt:0,}"
May 8 00:45:12.984829 env[1315]: time="2025-05-08T00:45:12.984741795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:12.984829 env[1315]: time="2025-05-08T00:45:12.984797958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:12.985069 env[1315]: time="2025-05-08T00:45:12.984808759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:12.985069 env[1315]: time="2025-05-08T00:45:12.985048335Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9 pid=2408 runtime=io.containerd.runc.v2
May 8 00:45:12.997855 update_engine[1309]: I0508 00:45:12.997808 1309 update_attempter.cc:509] Updating boot flags...
May 8 00:45:13.057419 env[1315]: time="2025-05-08T00:45:13.057375172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jrpn2,Uid:e264a0c7-8b56-4e83-9ffc-f4c0def1decb,Namespace:kube-system,Attempt:0,} returns sandbox id \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\""
May 8 00:45:13.058813 kubelet[2176]: E0508 00:45:13.058300 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:13.849071 kubelet[2176]: E0508 00:45:13.849025 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:13.872387 kubelet[2176]: I0508 00:45:13.872330 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8lwds" podStartSLOduration=1.872314078 podStartE2EDuration="1.872314078s" podCreationTimestamp="2025-05-08 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:13.871961296 +0000 UTC m=+16.145455720" watchObservedRunningTime="2025-05-08 00:45:13.872314078 +0000 UTC m=+16.145808462"
May 8 00:45:14.854928 kubelet[2176]: E0508 00:45:14.854367 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:17.278408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207023895.mount: Deactivated successfully.
May 8 00:45:19.544698 env[1315]: time="2025-05-08T00:45:19.544643012Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:19.546091 env[1315]: time="2025-05-08T00:45:19.546061039Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:19.547806 env[1315]: time="2025-05-08T00:45:19.547780400Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:19.548462 env[1315]: time="2025-05-08T00:45:19.548436071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 8 00:45:19.551686 env[1315]: time="2025-05-08T00:45:19.551123598Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 8 00:45:19.551882 env[1315]: time="2025-05-08T00:45:19.551589060Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:45:19.562363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513469654.mount: Deactivated successfully.
May 8 00:45:19.564461 env[1315]: time="2025-05-08T00:45:19.564424906Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\""
May 8 00:45:19.564984 env[1315]: time="2025-05-08T00:45:19.564892128Z" level=info msg="StartContainer for \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\""
May 8 00:45:19.663061 env[1315]: time="2025-05-08T00:45:19.663013840Z" level=info msg="StartContainer for \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\" returns successfully"
May 8 00:45:19.681762 env[1315]: time="2025-05-08T00:45:19.681719483Z" level=info msg="shim disconnected" id=e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55
May 8 00:45:19.681762 env[1315]: time="2025-05-08T00:45:19.681762325Z" level=warning msg="cleaning up after shim disconnected" id=e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55 namespace=k8s.io
May 8 00:45:19.681973 env[1315]: time="2025-05-08T00:45:19.681772646Z" level=info msg="cleaning up dead shim"
May 8 00:45:19.688792 env[1315]: time="2025-05-08T00:45:19.688751415Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2604 runtime=io.containerd.runc.v2\n"
May 8 00:45:19.865110 kubelet[2176]: E0508 00:45:19.865085 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:19.870150 env[1315]: time="2025-05-08T00:45:19.870109936Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:45:19.880511 env[1315]: time="2025-05-08T00:45:19.880472625Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\""
May 8 00:45:19.881184 env[1315]: time="2025-05-08T00:45:19.881153138Z" level=info msg="StartContainer for \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\""
May 8 00:45:19.932854 env[1315]: time="2025-05-08T00:45:19.932812376Z" level=info msg="StartContainer for \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\" returns successfully"
May 8 00:45:19.944191 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 00:45:19.944444 systemd[1]: Stopped systemd-sysctl.service.
May 8 00:45:19.944603 systemd[1]: Stopping systemd-sysctl.service...
May 8 00:45:19.946737 systemd[1]: Starting systemd-sysctl.service...
May 8 00:45:19.954100 systemd[1]: Finished systemd-sysctl.service.
May 8 00:45:19.966160 env[1315]: time="2025-05-08T00:45:19.966104988Z" level=info msg="shim disconnected" id=b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218
May 8 00:45:19.966160 env[1315]: time="2025-05-08T00:45:19.966151270Z" level=warning msg="cleaning up after shim disconnected" id=b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218 namespace=k8s.io
May 8 00:45:19.966160 env[1315]: time="2025-05-08T00:45:19.966160950Z" level=info msg="cleaning up dead shim"
May 8 00:45:19.972799 env[1315]: time="2025-05-08T00:45:19.972755742Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2668 runtime=io.containerd.runc.v2\n"
May 8 00:45:20.560606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55-rootfs.mount: Deactivated successfully.
May 8 00:45:20.868329 kubelet[2176]: E0508 00:45:20.868274 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:20.870425 env[1315]: time="2025-05-08T00:45:20.870378438Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:45:20.896032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652668740.mount: Deactivated successfully.
May 8 00:45:20.903173 env[1315]: time="2025-05-08T00:45:20.903106115Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\""
May 8 00:45:20.903948 env[1315]: time="2025-05-08T00:45:20.903909231Z" level=info msg="StartContainer for \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\""
May 8 00:45:20.970814 env[1315]: time="2025-05-08T00:45:20.970693006Z" level=info msg="StartContainer for \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\" returns successfully"
May 8 00:45:21.000136 env[1315]: time="2025-05-08T00:45:21.000092213Z" level=info msg="shim disconnected" id=332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03
May 8 00:45:21.000376 env[1315]: time="2025-05-08T00:45:21.000356425Z" level=warning msg="cleaning up after shim disconnected" id=332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03 namespace=k8s.io
May 8 00:45:21.000457 env[1315]: time="2025-05-08T00:45:21.000444269Z" level=info msg="cleaning up dead shim"
May 8 00:45:21.006613 env[1315]: time="2025-05-08T00:45:21.006581656Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n"
May 8 00:45:21.560340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03-rootfs.mount: Deactivated successfully.
May 8 00:45:21.872358 kubelet[2176]: E0508 00:45:21.872323 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:21.878037 env[1315]: time="2025-05-08T00:45:21.877951897Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:45:21.891915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462263656.mount: Deactivated successfully.
May 8 00:45:21.896721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205899541.mount: Deactivated successfully.
May 8 00:45:21.899930 env[1315]: time="2025-05-08T00:45:21.899891445Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\""
May 8 00:45:21.900484 env[1315]: time="2025-05-08T00:45:21.900457349Z" level=info msg="StartContainer for \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\""
May 8 00:45:21.950100 env[1315]: time="2025-05-08T00:45:21.950052892Z" level=info msg="StartContainer for \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\" returns successfully"
May 8 00:45:21.964747 env[1315]: time="2025-05-08T00:45:21.964690404Z" level=info msg="shim disconnected" id=8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b
May 8 00:45:21.964747 env[1315]: time="2025-05-08T00:45:21.964735006Z" level=warning msg="cleaning up after shim disconnected" id=8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b namespace=k8s.io
May 8 00:45:21.964747 env[1315]: time="2025-05-08T00:45:21.964746487Z" level=info msg="cleaning up dead shim"
May 8 00:45:21.972932 env[1315]: time="2025-05-08T00:45:21.972886878Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:45:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2777 runtime=io.containerd.runc.v2\n"
May 8 00:45:22.876936 kubelet[2176]: E0508 00:45:22.876899 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:22.889017 env[1315]: time="2025-05-08T00:45:22.888953317Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:45:22.916451 env[1315]: time="2025-05-08T00:45:22.916403453Z" level=info msg="CreateContainer within sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\""
May 8 00:45:22.917387 env[1315]: time="2025-05-08T00:45:22.917353452Z" level=info msg="StartContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\""
May 8 00:45:23.014994 env[1315]: time="2025-05-08T00:45:23.014945668Z" level=info msg="StartContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" returns successfully"
May 8 00:45:23.185196 kubelet[2176]: I0508 00:45:23.183424 2176 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 8 00:45:23.200643 kubelet[2176]: I0508 00:45:23.200584 2176 topology_manager.go:215] "Topology Admit Handler" podUID="57b4f1f7-fc75-42ba-8fbf-20452f375acc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j477l"
May 8 00:45:23.204605 kubelet[2176]: I0508 00:45:23.204508 2176 topology_manager.go:215] "Topology Admit Handler" podUID="678450d8-c67e-47c3-80d6-81ded9b5fb36" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jwcpw"
May 8 00:45:23.269404 kubelet[2176]: I0508 00:45:23.269354 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57b4f1f7-fc75-42ba-8fbf-20452f375acc-config-volume\") pod \"coredns-7db6d8ff4d-j477l\" (UID: \"57b4f1f7-fc75-42ba-8fbf-20452f375acc\") " pod="kube-system/coredns-7db6d8ff4d-j477l"
May 8 00:45:23.269544 kubelet[2176]: I0508 00:45:23.269438 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/678450d8-c67e-47c3-80d6-81ded9b5fb36-config-volume\") pod \"coredns-7db6d8ff4d-jwcpw\" (UID: \"678450d8-c67e-47c3-80d6-81ded9b5fb36\") " pod="kube-system/coredns-7db6d8ff4d-jwcpw"
May 8 00:45:23.269544 kubelet[2176]: I0508 00:45:23.269485 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8hk\" (UniqueName: \"kubernetes.io/projected/57b4f1f7-fc75-42ba-8fbf-20452f375acc-kube-api-access-4g8hk\") pod \"coredns-7db6d8ff4d-j477l\" (UID: \"57b4f1f7-fc75-42ba-8fbf-20452f375acc\") " pod="kube-system/coredns-7db6d8ff4d-j477l"
May 8 00:45:23.269544 kubelet[2176]: I0508 00:45:23.269507 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nk2r\" (UniqueName: \"kubernetes.io/projected/678450d8-c67e-47c3-80d6-81ded9b5fb36-kube-api-access-8nk2r\") pod \"coredns-7db6d8ff4d-jwcpw\" (UID: \"678450d8-c67e-47c3-80d6-81ded9b5fb36\") " pod="kube-system/coredns-7db6d8ff4d-jwcpw"
May 8 00:45:23.309682 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 8 00:45:23.408087 env[1315]: time="2025-05-08T00:45:23.406334316Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:23.409789 env[1315]: time="2025-05-08T00:45:23.408801214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:23.413856 env[1315]: time="2025-05-08T00:45:23.413728449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:45:23.421709 env[1315]: time="2025-05-08T00:45:23.421536279Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 8 00:45:23.428246 env[1315]: time="2025-05-08T00:45:23.428207584Z" level=info msg="CreateContainer within sandbox \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:45:23.435733 env[1315]: time="2025-05-08T00:45:23.435637358Z" level=info msg="CreateContainer within sandbox \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\""
May 8 00:45:23.438468 env[1315]: time="2025-05-08T00:45:23.438433909Z" level=info msg="StartContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\""
May 8 00:45:23.512693 kubelet[2176]: E0508 00:45:23.511494 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:23.512794 env[1315]: time="2025-05-08T00:45:23.512078351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j477l,Uid:57b4f1f7-fc75-42ba-8fbf-20452f375acc,Namespace:kube-system,Attempt:0,}"
May 8 00:45:23.513837 kubelet[2176]: E0508 00:45:23.513810 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:23.514200 env[1315]: time="2025-05-08T00:45:23.514151313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jwcpw,Uid:678450d8-c67e-47c3-80d6-81ded9b5fb36,Namespace:kube-system,Attempt:0,}"
May 8 00:45:23.539523 env[1315]: time="2025-05-08T00:45:23.539370914Z" level=info msg="StartContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" returns successfully"
May 8 00:45:23.587679 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 8 00:45:23.882273 kubelet[2176]: E0508 00:45:23.882237 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:23.884878 kubelet[2176]: E0508 00:45:23.884850 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:23.898007 kubelet[2176]: I0508 00:45:23.897928 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vzm25" podStartSLOduration=5.121589041 podStartE2EDuration="11.897881058s" podCreationTimestamp="2025-05-08 00:45:12 +0000 UTC" firstStartedPulling="2025-05-08 00:45:12.773993542 +0000 UTC m=+15.047487966" lastFinishedPulling="2025-05-08 00:45:19.550285559 +0000 UTC m=+21.823779983" observedRunningTime="2025-05-08 00:45:23.896824336 +0000 UTC m=+26.170318760" watchObservedRunningTime="2025-05-08 00:45:23.897881058 +0000 UTC m=+26.171375482"
May 8 00:45:24.148563 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:34670.service.
May 8 00:45:24.206795 sshd[2987]: Accepted publickey for core from 10.0.0.1 port 34670 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:24.208884 sshd[2987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:24.212802 systemd-logind[1304]: New session 6 of user core.
May 8 00:45:24.213675 systemd[1]: Started session-6.scope.
May 8 00:45:24.384885 sshd[2987]: pam_unix(sshd:session): session closed for user core
May 8 00:45:24.387474 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:34670.service: Deactivated successfully.
May 8 00:45:24.388482 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit.
May 8 00:45:24.388524 systemd[1]: session-6.scope: Deactivated successfully.
May 8 00:45:24.389231 systemd-logind[1304]: Removed session 6.
May 8 00:45:24.886362 kubelet[2176]: E0508 00:45:24.886319 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:24.886763 kubelet[2176]: E0508 00:45:24.886639 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:25.889525 kubelet[2176]: E0508 00:45:25.889492 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:27.225583 systemd-networkd[1097]: cilium_host: Link UP
May 8 00:45:27.226258 systemd-networkd[1097]: cilium_net: Link UP
May 8 00:45:27.226662 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 8 00:45:27.226710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 8 00:45:27.226754 systemd-networkd[1097]: cilium_net: Gained carrier
May 8 00:45:27.227404 systemd-networkd[1097]: cilium_host: Gained carrier
May 8 00:45:27.315024 systemd-networkd[1097]: cilium_vxlan: Link UP
May 8 00:45:27.315030 systemd-networkd[1097]: cilium_vxlan: Gained carrier
May 8 00:45:27.352786 systemd-networkd[1097]: cilium_host: Gained IPv6LL
May 8 00:45:27.592806 systemd-networkd[1097]: cilium_net: Gained IPv6LL
May 8 00:45:27.628681 kernel: NET: Registered PF_ALG protocol family
May 8 00:45:28.247720 systemd-networkd[1097]: lxc_health: Link UP
May 8 00:45:28.259011 systemd-networkd[1097]: lxc_health: Gained carrier
May 8 00:45:28.259680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:45:28.643629 systemd-networkd[1097]: lxc0dc5d2532db2: Link UP
May 8 00:45:28.653983 systemd-networkd[1097]: lxcadc180b829e4: Link UP
May 8 00:45:28.655749 kernel: eth0: renamed from tmp0793e
May 8 00:45:28.666747 kernel: eth0: renamed from tmp9909c
May 8 00:45:28.686041 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0dc5d2532db2: link becomes ready
May 8 00:45:28.685280 systemd-networkd[1097]: lxc0dc5d2532db2: Gained carrier
May 8 00:45:28.688562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcadc180b829e4: link becomes ready
May 8 00:45:28.687136 systemd-networkd[1097]: lxcadc180b829e4: Gained carrier
May 8 00:45:28.703390 kubelet[2176]: E0508 00:45:28.703299 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:28.720034 kubelet[2176]: I0508 00:45:28.719967 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-jrpn2" podStartSLOduration=6.3559444880000004 podStartE2EDuration="16.719932811s" podCreationTimestamp="2025-05-08 00:45:12 +0000 UTC" firstStartedPulling="2025-05-08 00:45:13.059844447 +0000 UTC m=+15.333338871" lastFinishedPulling="2025-05-08 00:45:23.42383277 +0000 UTC m=+25.697327194" observedRunningTime="2025-05-08 00:45:23.906534241 +0000 UTC m=+26.180028665" watchObservedRunningTime="2025-05-08 00:45:28.719932811 +0000 UTC m=+30.993427235"
May 8 00:45:28.895252 kubelet[2176]: E0508 00:45:28.895137 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:29.388413 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:34682.service.
May 8 00:45:29.432918 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 34682 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:29.434449 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:29.438045 systemd-logind[1304]: New session 7 of user core.
May 8 00:45:29.439259 systemd[1]: Started session-7.scope.
May 8 00:45:29.544810 systemd-networkd[1097]: lxc_health: Gained IPv6LL
May 8 00:45:29.551070 sshd[3377]: pam_unix(sshd:session): session closed for user core
May 8 00:45:29.553602 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:34682.service: Deactivated successfully.
May 8 00:45:29.554483 systemd[1]: session-7.scope: Deactivated successfully.
May 8 00:45:29.557450 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit.
May 8 00:45:29.558177 systemd-logind[1304]: Removed session 7.
May 8 00:45:30.315595 systemd-networkd[1097]: lxcadc180b829e4: Gained IPv6LL
May 8 00:45:30.760852 systemd-networkd[1097]: lxc0dc5d2532db2: Gained IPv6LL
May 8 00:45:32.285049 env[1315]: time="2025-05-08T00:45:32.284151550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:32.285049 env[1315]: time="2025-05-08T00:45:32.284193671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:32.285049 env[1315]: time="2025-05-08T00:45:32.284203671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:32.285049 env[1315]: time="2025-05-08T00:45:32.284315634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9909c00ddf73d3c9ad843ac45efa282697a22cadbabb16346beb509aaff959e5 pid=3414 runtime=io.containerd.runc.v2
May 8 00:45:32.285454 env[1315]: time="2025-05-08T00:45:32.285256861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:45:32.285454 env[1315]: time="2025-05-08T00:45:32.285296742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:45:32.285454 env[1315]: time="2025-05-08T00:45:32.285306783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:45:32.285522 env[1315]: time="2025-05-08T00:45:32.285443706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0793e12f9e4998e524bf172e44da92795533cbe3910437bb3b3bf09fd246e4ff pid=3423 runtime=io.containerd.runc.v2
May 8 00:45:32.351835 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:45:32.359678 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 8 00:45:32.371080 env[1315]: time="2025-05-08T00:45:32.371035458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j477l,Uid:57b4f1f7-fc75-42ba-8fbf-20452f375acc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9909c00ddf73d3c9ad843ac45efa282697a22cadbabb16346beb509aaff959e5\""
May 8 00:45:32.372004 kubelet[2176]: E0508 00:45:32.371975 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:32.381179 env[1315]: time="2025-05-08T00:45:32.381135985Z" level=info msg="CreateContainer within sandbox \"9909c00ddf73d3c9ad843ac45efa282697a22cadbabb16346beb509aaff959e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:45:32.382230 env[1315]: time="2025-05-08T00:45:32.382182415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jwcpw,Uid:678450d8-c67e-47c3-80d6-81ded9b5fb36,Namespace:kube-system,Attempt:0,} returns sandbox id \"0793e12f9e4998e524bf172e44da92795533cbe3910437bb3b3bf09fd246e4ff\""
May 8 00:45:32.382849 kubelet[2176]: E0508 00:45:32.382819 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:32.384693 env[1315]: time="2025-05-08T00:45:32.384639444Z" level=info msg="CreateContainer within sandbox \"0793e12f9e4998e524bf172e44da92795533cbe3910437bb3b3bf09fd246e4ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 8 00:45:32.400132 env[1315]: time="2025-05-08T00:45:32.400093603Z" level=info msg="CreateContainer within sandbox \"9909c00ddf73d3c9ad843ac45efa282697a22cadbabb16346beb509aaff959e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e8ab7e142a83dd1d48957df66fa78c173e8b1475f2ace486ee1ec7e65c87df8\""
May 8 00:45:32.401797 env[1315]: time="2025-05-08T00:45:32.401756771Z" level=info msg="CreateContainer within sandbox \"0793e12f9e4998e524bf172e44da92795533cbe3910437bb3b3bf09fd246e4ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef511de47ba785b76f2a8d4bbe57ab6f2dff2bd1a808f2dc975a78e1642981f5\""
May 8 00:45:32.401981 env[1315]: time="2025-05-08T00:45:32.401952216Z" level=info msg="StartContainer for \"8e8ab7e142a83dd1d48957df66fa78c173e8b1475f2ace486ee1ec7e65c87df8\""
May 8 00:45:32.402325 env[1315]: time="2025-05-08T00:45:32.402063659Z" level=info msg="StartContainer for \"ef511de47ba785b76f2a8d4bbe57ab6f2dff2bd1a808f2dc975a78e1642981f5\""
May 8 00:45:32.465806 env[1315]: time="2025-05-08T00:45:32.465723908Z" level=info msg="StartContainer for \"ef511de47ba785b76f2a8d4bbe57ab6f2dff2bd1a808f2dc975a78e1642981f5\" returns successfully"
May 8 00:45:32.466280 env[1315]: time="2025-05-08T00:45:32.466234722Z" level=info msg="StartContainer for \"8e8ab7e142a83dd1d48957df66fa78c173e8b1475f2ace486ee1ec7e65c87df8\" returns successfully"
May 8 00:45:32.902949 kubelet[2176]: E0508 00:45:32.902912 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:32.908351 kubelet[2176]: E0508 00:45:32.908268 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:32.945452 kubelet[2176]: I0508 00:45:32.945386 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j477l" podStartSLOduration=20.945367534 podStartE2EDuration="20.945367534s" podCreationTimestamp="2025-05-08 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:32.91634883 +0000 UTC m=+35.189843254" watchObservedRunningTime="2025-05-08 00:45:32.945367534 +0000 UTC m=+35.218861958"
May 8 00:45:33.084596 kubelet[2176]: I0508 00:45:33.084534 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jwcpw" podStartSLOduration=21.084515932 podStartE2EDuration="21.084515932s" podCreationTimestamp="2025-05-08 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:45:32.945842948 +0000 UTC m=+35.219337372" watchObservedRunningTime="2025-05-08 00:45:33.084515932 +0000 UTC m=+35.358010356"
May 8 00:45:33.289088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027207050.mount: Deactivated successfully.
May 8 00:45:33.910426 kubelet[2176]: E0508 00:45:33.910336 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:33.910426 kubelet[2176]: E0508 00:45:33.910366 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:34.554308 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:54048.service.
May 8 00:45:34.596259 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 54048 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:34.597617 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:34.601177 systemd-logind[1304]: New session 8 of user core.
May 8 00:45:34.602358 systemd[1]: Started session-8.scope.
May 8 00:45:34.711604 sshd[3571]: pam_unix(sshd:session): session closed for user core
May 8 00:45:34.714141 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:54048.service: Deactivated successfully.
May 8 00:45:34.715321 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit.
May 8 00:45:34.715410 systemd[1]: session-8.scope: Deactivated successfully.
May 8 00:45:34.716240 systemd-logind[1304]: Removed session 8.
May 8 00:45:34.912243 kubelet[2176]: E0508 00:45:34.912216 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:34.912690 kubelet[2176]: E0508 00:45:34.912264 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:45:39.714592 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:54062.service.
May 8 00:45:39.756553 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:39.757872 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:39.761737 systemd-logind[1304]: New session 9 of user core.
May 8 00:45:39.762364 systemd[1]: Started session-9.scope.
May 8 00:45:39.875880 sshd[3586]: pam_unix(sshd:session): session closed for user core
May 8 00:45:39.876873 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:54068.service.
May 8 00:45:39.879015 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:54062.service: Deactivated successfully.
May 8 00:45:39.880133 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit.
May 8 00:45:39.880181 systemd[1]: session-9.scope: Deactivated successfully.
May 8 00:45:39.881045 systemd-logind[1304]: Removed session 9.
May 8 00:45:39.917059 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 54068 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:39.918413 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:39.922885 systemd-logind[1304]: New session 10 of user core.
May 8 00:45:39.923859 systemd[1]: Started session-10.scope.
May 8 00:45:40.073996 sshd[3599]: pam_unix(sshd:session): session closed for user core
May 8 00:45:40.074312 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:54082.service.
May 8 00:45:40.079405 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:54068.service: Deactivated successfully.
May 8 00:45:40.081076 systemd[1]: session-10.scope: Deactivated successfully.
May 8 00:45:40.082892 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit.
May 8 00:45:40.083740 systemd-logind[1304]: Removed session 10.
May 8 00:45:40.123280 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:40.124643 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:40.128705 systemd-logind[1304]: New session 11 of user core.
May 8 00:45:40.129247 systemd[1]: Started session-11.scope.
May 8 00:45:40.244045 sshd[3612]: pam_unix(sshd:session): session closed for user core
May 8 00:45:40.246686 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:54082.service: Deactivated successfully.
May 8 00:45:40.247776 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit.
May 8 00:45:40.247852 systemd[1]: session-11.scope: Deactivated successfully.
May 8 00:45:40.248620 systemd-logind[1304]: Removed session 11.
May 8 00:45:45.248663 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:49874.service.
May 8 00:45:45.288793 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 49874 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:45.291055 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:45.294559 systemd-logind[1304]: New session 12 of user core.
May 8 00:45:45.295451 systemd[1]: Started session-12.scope.
May 8 00:45:45.404769 sshd[3631]: pam_unix(sshd:session): session closed for user core
May 8 00:45:45.407336 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit.
May 8 00:45:45.407553 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:49874.service: Deactivated successfully.
May 8 00:45:45.408442 systemd[1]: session-12.scope: Deactivated successfully.
May 8 00:45:45.408871 systemd-logind[1304]: Removed session 12.
May 8 00:45:50.407814 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:49890.service.
May 8 00:45:50.457744 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 49890 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:50.459300 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:50.462997 systemd-logind[1304]: New session 13 of user core.
May 8 00:45:50.463864 systemd[1]: Started session-13.scope.
May 8 00:45:50.580703 sshd[3646]: pam_unix(sshd:session): session closed for user core
May 8 00:45:50.583126 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:49898.service.
May 8 00:45:50.587015 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:49890.service: Deactivated successfully.
May 8 00:45:50.588339 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit.
May 8 00:45:50.588377 systemd[1]: session-13.scope: Deactivated successfully.
May 8 00:45:50.592365 systemd-logind[1304]: Removed session 13.
May 8 00:45:50.624002 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 49898 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:50.625362 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:50.628718 systemd-logind[1304]: New session 14 of user core.
May 8 00:45:50.629623 systemd[1]: Started session-14.scope.
May 8 00:45:50.833674 sshd[3659]: pam_unix(sshd:session): session closed for user core
May 8 00:45:50.836121 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:49910.service.
May 8 00:45:50.838751 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit.
May 8 00:45:50.839574 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:49898.service: Deactivated successfully.
May 8 00:45:50.840335 systemd[1]: session-14.scope: Deactivated successfully.
May 8 00:45:50.840833 systemd-logind[1304]: Removed session 14.
May 8 00:45:50.876285 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 49910 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:50.877611 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:50.881922 systemd-logind[1304]: New session 15 of user core.
May 8 00:45:50.882138 systemd[1]: Started session-15.scope.
May 8 00:45:52.209170 sshd[3671]: pam_unix(sshd:session): session closed for user core
May 8 00:45:52.211996 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:49924.service.
May 8 00:45:52.214031 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:49910.service: Deactivated successfully.
May 8 00:45:52.217215 systemd[1]: session-15.scope: Deactivated successfully.
May 8 00:45:52.217566 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit.
May 8 00:45:52.219403 systemd-logind[1304]: Removed session 15.
May 8 00:45:52.265526 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 49924 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:52.266826 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:52.270761 systemd-logind[1304]: New session 16 of user core.
May 8 00:45:52.271010 systemd[1]: Started session-16.scope.
May 8 00:45:52.509853 sshd[3693]: pam_unix(sshd:session): session closed for user core
May 8 00:45:52.511857 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:60990.service.
May 8 00:45:52.518254 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:49924.service: Deactivated successfully.
May 8 00:45:52.519189 systemd[1]: session-16.scope: Deactivated successfully.
May 8 00:45:52.519214 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit.
May 8 00:45:52.520103 systemd-logind[1304]: Removed session 16.
May 8 00:45:52.553898 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 60990 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:52.555158 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:52.558715 systemd-logind[1304]: New session 17 of user core.
May 8 00:45:52.559434 systemd[1]: Started session-17.scope.
May 8 00:45:52.666073 sshd[3707]: pam_unix(sshd:session): session closed for user core
May 8 00:45:52.669933 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit.
May 8 00:45:52.670068 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:60990.service: Deactivated successfully.
May 8 00:45:52.670860 systemd[1]: session-17.scope: Deactivated successfully.
May 8 00:45:52.671319 systemd-logind[1304]: Removed session 17.
May 8 00:45:57.669331 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:60996.service.
May 8 00:45:57.709014 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 60996 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:45:57.710690 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:57.714238 systemd-logind[1304]: New session 18 of user core.
May 8 00:45:57.715062 systemd[1]: Started session-18.scope.
May 8 00:45:57.823738 sshd[3726]: pam_unix(sshd:session): session closed for user core
May 8 00:45:57.829353 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:60996.service: Deactivated successfully.
May 8 00:45:57.830351 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit.
May 8 00:45:57.830427 systemd[1]: session-18.scope: Deactivated successfully.
May 8 00:45:57.831198 systemd-logind[1304]: Removed session 18.
May 8 00:46:02.826549 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:38458.service.
May 8 00:46:02.864718 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:46:02.865842 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:46:02.870853 systemd-logind[1304]: New session 19 of user core.
May 8 00:46:02.872635 systemd[1]: Started session-19.scope.
May 8 00:46:02.996288 sshd[3742]: pam_unix(sshd:session): session closed for user core
May 8 00:46:02.998737 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:38458.service: Deactivated successfully.
May 8 00:46:02.999726 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit.
May 8 00:46:02.999775 systemd[1]: session-19.scope: Deactivated successfully.
May 8 00:46:03.000931 systemd-logind[1304]: Removed session 19.
May 8 00:46:07.999967 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:38474.service.
May 8 00:46:08.045153 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 38474 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:46:08.046563 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:46:08.050999 systemd-logind[1304]: New session 20 of user core.
May 8 00:46:08.051947 systemd[1]: Started session-20.scope.
May 8 00:46:08.190639 sshd[3756]: pam_unix(sshd:session): session closed for user core
May 8 00:46:08.194600 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:38474.service: Deactivated successfully.
May 8 00:46:08.195735 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit.
May 8 00:46:08.195769 systemd[1]: session-20.scope: Deactivated successfully.
May 8 00:46:08.196997 systemd-logind[1304]: Removed session 20.
May 8 00:46:13.192474 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:38770.service.
May 8 00:46:13.233781 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 38770 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:46:13.234997 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:46:13.243534 systemd-logind[1304]: New session 21 of user core.
May 8 00:46:13.243882 systemd[1]: Started session-21.scope.
May 8 00:46:13.360357 sshd[3773]: pam_unix(sshd:session): session closed for user core
May 8 00:46:13.361157 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:38778.service.
May 8 00:46:13.370323 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:38770.service: Deactivated successfully.
May 8 00:46:13.371253 systemd-logind[1304]: Session 21 logged out. Waiting for processes to exit.
May 8 00:46:13.371322 systemd[1]: session-21.scope: Deactivated successfully.
May 8 00:46:13.373173 systemd-logind[1304]: Removed session 21.
May 8 00:46:13.407123 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 38778 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:46:13.408374 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:46:13.412360 systemd-logind[1304]: New session 22 of user core.
May 8 00:46:13.412539 systemd[1]: Started session-22.scope.
May 8 00:46:15.190058 env[1315]: time="2025-05-08T00:46:15.189982876Z" level=info msg="StopContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" with timeout 30 (s)"
May 8 00:46:15.190487 env[1315]: time="2025-05-08T00:46:15.190390727Z" level=info msg="Stop container \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" with signal terminated"
May 8 00:46:15.227642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931-rootfs.mount: Deactivated successfully.
May 8 00:46:15.239234 env[1315]: time="2025-05-08T00:46:15.239188665Z" level=info msg="shim disconnected" id=cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931
May 8 00:46:15.239234 env[1315]: time="2025-05-08T00:46:15.239232586Z" level=warning msg="cleaning up after shim disconnected" id=cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931 namespace=k8s.io
May 8 00:46:15.239456 env[1315]: time="2025-05-08T00:46:15.239242587Z" level=info msg="cleaning up dead shim"
May 8 00:46:15.241725 env[1315]: time="2025-05-08T00:46:15.241605688Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:46:15.246968 env[1315]: time="2025-05-08T00:46:15.246934025Z" level=info msg="StopContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" with timeout 2 (s)"
May 8 00:46:15.247215 env[1315]: time="2025-05-08T00:46:15.247163551Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3835 runtime=io.containerd.runc.v2\ntime=\"2025-05-08T00:46:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
May 8 00:46:15.247829 env[1315]: time="2025-05-08T00:46:15.247670364Z" level=info msg="Stop container \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" with signal terminated"
May 8 00:46:15.249417 env[1315]: time="2025-05-08T00:46:15.249381848Z" level=info msg="StopContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" returns successfully"
May 8 00:46:15.249961 env[1315]: time="2025-05-08T00:46:15.249934502Z" level=info msg="StopPodSandbox for \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\""
May 8 00:46:15.250011 env[1315]: time="2025-05-08T00:46:15.249993504Z" level=info msg="Container to stop \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:46:15.251938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9-shm.mount: Deactivated successfully.
May 8 00:46:15.255614 systemd-networkd[1097]: lxc_health: Link DOWN
May 8 00:46:15.255619 systemd-networkd[1097]: lxc_health: Lost carrier
May 8 00:46:15.277890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9-rootfs.mount: Deactivated successfully.
May 8 00:46:15.285515 env[1315]: time="2025-05-08T00:46:15.285235253Z" level=info msg="shim disconnected" id=db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9 May 8 00:46:15.285515 env[1315]: time="2025-05-08T00:46:15.285281454Z" level=warning msg="cleaning up after shim disconnected" id=db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9 namespace=k8s.io May 8 00:46:15.285515 env[1315]: time="2025-05-08T00:46:15.285298254Z" level=info msg="cleaning up dead shim" May 8 00:46:15.294098 env[1315]: time="2025-05-08T00:46:15.294037680Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3878 runtime=io.containerd.runc.v2\n" May 8 00:46:15.294379 env[1315]: time="2025-05-08T00:46:15.294343208Z" level=info msg="TearDown network for sandbox \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\" successfully" May 8 00:46:15.294431 env[1315]: time="2025-05-08T00:46:15.294380409Z" level=info msg="StopPodSandbox for \"db2f4646b448424c845eef9d45745c863832fa9ee0c5045fc151f6a8d6d0b8d9\" returns successfully" May 8 00:46:15.309581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2-rootfs.mount: Deactivated successfully. May 8 00:46:15.312903 env[1315]: time="2025-05-08T00:46:15.312856565Z" level=info msg="shim disconnected" id=763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2 May 8 00:46:15.312903 env[1315]: time="2025-05-08T00:46:15.312900126Z" level=warning msg="cleaning up after shim disconnected" id=763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2 namespace=k8s.io May 8 00:46:15.313041 env[1315]: time="2025-05-08T00:46:15.312910606Z" level=info msg="cleaning up dead shim" May 8 00:46:15.320287 env[1315]: time="2025-05-08T00:46:15.320183434Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3904 runtime=io.containerd.runc.v2\n" May 8 00:46:15.323021 env[1315]: time="2025-05-08T00:46:15.322971746Z" level=info msg="StopContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" returns successfully" May 8 00:46:15.323613 env[1315]: time="2025-05-08T00:46:15.323572161Z" level=info msg="StopPodSandbox for \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\"" May 8 00:46:15.323673 env[1315]: time="2025-05-08T00:46:15.323639683Z" level=info msg="Container to stop \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:46:15.323710 env[1315]: time="2025-05-08T00:46:15.323669244Z" level=info msg="Container to stop \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:46:15.323710 env[1315]: time="2025-05-08T00:46:15.323682844Z" level=info msg="Container to stop \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:46:15.323710 env[1315]: time="2025-05-08T00:46:15.323703285Z" level=info msg="Container to stop \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:46:15.323790 env[1315]: time="2025-05-08T00:46:15.323715485Z" level=info msg="Container to stop 
\"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:46:15.349995 env[1315]: time="2025-05-08T00:46:15.349936161Z" level=info msg="shim disconnected" id=887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862 May 8 00:46:15.349995 env[1315]: time="2025-05-08T00:46:15.349977802Z" level=warning msg="cleaning up after shim disconnected" id=887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862 namespace=k8s.io May 8 00:46:15.349995 env[1315]: time="2025-05-08T00:46:15.350001963Z" level=info msg="cleaning up dead shim" May 8 00:46:15.360364 env[1315]: time="2025-05-08T00:46:15.360309869Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n" May 8 00:46:15.360795 env[1315]: time="2025-05-08T00:46:15.360766121Z" level=info msg="TearDown network for sandbox \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" successfully" May 8 00:46:15.360833 env[1315]: time="2025-05-08T00:46:15.360797041Z" level=info msg="StopPodSandbox for \"887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862\" returns successfully" May 8 00:46:15.388419 kubelet[2176]: I0508 00:46:15.388367 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8mf\" (UniqueName: \"kubernetes.io/projected/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-kube-api-access-nj8mf\") pod \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\" (UID: \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\") " May 8 00:46:15.388419 kubelet[2176]: I0508 00:46:15.388427 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-cilium-config-path\") pod \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\" (UID: \"e264a0c7-8b56-4e83-9ffc-f4c0def1decb\") " May 8 00:46:15.397121 kubelet[2176]: I0508 00:46:15.397063 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-kube-api-access-nj8mf" (OuterVolumeSpecName: "kube-api-access-nj8mf") pod "e264a0c7-8b56-4e83-9ffc-f4c0def1decb" (UID: "e264a0c7-8b56-4e83-9ffc-f4c0def1decb"). InnerVolumeSpecName "kube-api-access-nj8mf". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:46:15.398439 kubelet[2176]: I0508 00:46:15.398407 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e264a0c7-8b56-4e83-9ffc-f4c0def1decb" (UID: "e264a0c7-8b56-4e83-9ffc-f4c0def1decb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:46:15.489365 kubelet[2176]: I0508 00:46:15.488621 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-config-path\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489365 kubelet[2176]: I0508 00:46:15.489332 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-lib-modules\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489368 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-net\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489398 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-cgroup\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489412 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-etc-cni-netd\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489429 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-run\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489445 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-kernel\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489530 kubelet[2176]: I0508 00:46:15.489462 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-bpf-maps\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489478 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-xtables-lock\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489500 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksdbf\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-kube-api-access-ksdbf\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: 
\"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489518 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-clustermesh-secrets\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489531 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hostproc\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489548 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hubble-tls\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489701 kubelet[2176]: I0508 00:46:15.489562 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cni-path\") pod \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\" (UID: \"02ed2ff0-d90e-44c9-beef-cc7bfd771bed\") " May 8 00:46:15.489879 kubelet[2176]: I0508 00:46:15.489599 2176 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nj8mf\" (UniqueName: \"kubernetes.io/projected/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-kube-api-access-nj8mf\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.489879 kubelet[2176]: I0508 00:46:15.489608 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e264a0c7-8b56-4e83-9ffc-f4c0def1decb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.489879 kubelet[2176]: I0508 00:46:15.489672 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cni-path" (OuterVolumeSpecName: "cni-path") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.489879 kubelet[2176]: I0508 00:46:15.489710 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.489879 kubelet[2176]: I0508 00:46:15.489726 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490051 kubelet[2176]: I0508 00:46:15.489740 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490051 kubelet[2176]: I0508 00:46:15.489753 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490051 kubelet[2176]: I0508 00:46:15.489766 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490051 kubelet[2176]: I0508 00:46:15.489780 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490051 kubelet[2176]: I0508 00:46:15.489793 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490183 kubelet[2176]: I0508 00:46:15.489805 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490281 kubelet[2176]: I0508 00:46:15.490255 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hostproc" (OuterVolumeSpecName: "hostproc") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:15.490611 kubelet[2176]: I0508 00:46:15.490570 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:46:15.492600 kubelet[2176]: I0508 00:46:15.492562 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-kube-api-access-ksdbf" (OuterVolumeSpecName: "kube-api-access-ksdbf") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "kube-api-access-ksdbf". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:46:15.493203 kubelet[2176]: I0508 00:46:15.493177 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:46:15.493385 kubelet[2176]: I0508 00:46:15.493330 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "02ed2ff0-d90e-44c9-beef-cc7bfd771bed" (UID: "02ed2ff0-d90e-44c9-beef-cc7bfd771bed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:46:15.590669 kubelet[2176]: I0508 00:46:15.590611 2176 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ksdbf\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-kube-api-access-ksdbf\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.590821 kubelet[2176]: I0508 00:46:15.590808 2176 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.590923 kubelet[2176]: I0508 00:46:15.590911 2176 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.590985 kubelet[2176]: I0508 00:46:15.590974 2176 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.591902 kubelet[2176]: I0508 00:46:15.591881 2176 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592013 kubelet[2176]: I0508 00:46:15.592001 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592076 kubelet[2176]: I0508 00:46:15.592066 2176 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592133 kubelet[2176]: I0508 00:46:15.592123 2176 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 
00:46:15.592190 kubelet[2176]: I0508 00:46:15.592180 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592250 kubelet[2176]: I0508 00:46:15.592240 2176 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592319 kubelet[2176]: I0508 00:46:15.592308 2176 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592411 kubelet[2176]: I0508 00:46:15.592369 2176 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592481 kubelet[2176]: I0508 00:46:15.592470 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:46:15.592539 kubelet[2176]: I0508 00:46:15.592530 2176 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02ed2ff0-d90e-44c9-beef-cc7bfd771bed-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:46:16.004249 kubelet[2176]: I0508 00:46:16.004217 2176 scope.go:117] "RemoveContainer" containerID="763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2" May 8 00:46:16.010869 env[1315]: time="2025-05-08T00:46:16.010829401Z" level=info msg="RemoveContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\"" May 8 00:46:16.014226 env[1315]: time="2025-05-08T00:46:16.014188807Z" level=info msg="RemoveContainer for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" returns successfully" May 8 00:46:16.014452 kubelet[2176]: I0508 00:46:16.014430 2176 scope.go:117] "RemoveContainer" containerID="8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b" May 8 00:46:16.015683 env[1315]: time="2025-05-08T00:46:16.015635364Z" level=info msg="RemoveContainer for \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\"" May 8 00:46:16.017891 env[1315]: time="2025-05-08T00:46:16.017852300Z" level=info msg="RemoveContainer for \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\" returns successfully" May 8 00:46:16.018119 kubelet[2176]: I0508 00:46:16.018020 2176 scope.go:117] "RemoveContainer" containerID="332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03" May 8 00:46:16.019292 env[1315]: time="2025-05-08T00:46:16.019264896Z" level=info msg="RemoveContainer for \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\"" May 8 00:46:16.022645 env[1315]: time="2025-05-08T00:46:16.021332949Z" level=info msg="RemoveContainer for \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\" returns successfully" May 8 00:46:16.023132 kubelet[2176]: I0508 00:46:16.021517 2176 scope.go:117] "RemoveContainer" containerID="b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218" May 8 00:46:16.023666 env[1315]: time="2025-05-08T00:46:16.023623487Z" level=info msg="RemoveContainer for 
\"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\"" May 8 00:46:16.026110 env[1315]: time="2025-05-08T00:46:16.026068869Z" level=info msg="RemoveContainer for \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\" returns successfully" May 8 00:46:16.027631 kubelet[2176]: I0508 00:46:16.027606 2176 scope.go:117] "RemoveContainer" containerID="e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55" May 8 00:46:16.035809 env[1315]: time="2025-05-08T00:46:16.035772997Z" level=info msg="RemoveContainer for \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\"" May 8 00:46:16.038787 env[1315]: time="2025-05-08T00:46:16.038671471Z" level=info msg="RemoveContainer for \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\" returns successfully" May 8 00:46:16.038903 kubelet[2176]: I0508 00:46:16.038830 2176 scope.go:117] "RemoveContainer" containerID="763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2" May 8 00:46:16.039347 env[1315]: time="2025-05-08T00:46:16.039163243Z" level=error msg="ContainerStatus for \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\": not found" May 8 00:46:16.040351 kubelet[2176]: E0508 00:46:16.040288 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\": not found" containerID="763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2" May 8 00:46:16.040446 kubelet[2176]: I0508 00:46:16.040345 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2"} err="failed to get container status \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"763b92751983b379ede77c58c3d230b0cdb5c5db2464e4389058ae80ed6498b2\": not found" May 8 00:46:16.040446 kubelet[2176]: I0508 00:46:16.040440 2176 scope.go:117] "RemoveContainer" containerID="8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b" May 8 00:46:16.040709 env[1315]: time="2025-05-08T00:46:16.040638361Z" level=error msg="ContainerStatus for \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\": not found" May 8 00:46:16.040864 kubelet[2176]: E0508 00:46:16.040844 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\": not found" containerID="8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b" May 8 00:46:16.040961 kubelet[2176]: I0508 00:46:16.040940 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b"} err="failed to get container status \"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"8648c1ecb95b9cc320e28d8d7a46606d56b2fa2dba622bc7e3de2be1a1a1b94b\": not found" May 8 00:46:16.041026 kubelet[2176]: I0508 00:46:16.041014 2176 scope.go:117] "RemoveContainer" containerID="332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03" May 8 00:46:16.041344 env[1315]: time="2025-05-08T00:46:16.041251376Z" level=error msg="ContainerStatus for \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\": not found" May 8 00:46:16.041453 kubelet[2176]: E0508 00:46:16.041377 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\": not found" containerID="332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03" May 8 00:46:16.041453 kubelet[2176]: I0508 00:46:16.041403 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03"} err="failed to get container status \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\": rpc error: code = NotFound desc = an error occurred when try to find container \"332ed13ffc7527651cf7d5d6674ac0799266d1561c679b856294cdea70788b03\": not found" May 8 00:46:16.041453 kubelet[2176]: I0508 00:46:16.041415 2176 scope.go:117] "RemoveContainer" containerID="b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218" May 8 00:46:16.041744 env[1315]: time="2025-05-08T00:46:16.041687787Z" level=error msg="ContainerStatus for \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\": not found" May 8 00:46:16.041862 kubelet[2176]: E0508 00:46:16.041831 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\": not found" containerID="b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218" May 8 00:46:16.041912 kubelet[2176]: I0508 00:46:16.041857 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218"} err="failed to get container status \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2556d354dee7ea9f81b80ee2a54d040afd2c04c5ae1001c8bacd38580d25218\": not found" May 8 00:46:16.041912 kubelet[2176]: I0508 00:46:16.041873 2176 scope.go:117] "RemoveContainer" containerID="e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55" May 8 00:46:16.042178 env[1315]: time="2025-05-08T00:46:16.042088518Z" level=error msg="ContainerStatus for \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\": not found" May 8 00:46:16.042239 kubelet[2176]: E0508 00:46:16.042204 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\": not found" containerID="e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55" May 8 00:46:16.042269 kubelet[2176]: I0508 00:46:16.042228 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55"} err="failed to get container status \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0cb719495a80527b59438bd86d26afcfd2a2cccf7ab8ba26f547f3622cf8f55\": not found" May 8 00:46:16.042269 kubelet[2176]: I0508 00:46:16.042252 2176 scope.go:117] "RemoveContainer" containerID="cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931" May 8 00:46:16.043412 env[1315]: time="2025-05-08T00:46:16.043174465Z" level=info msg="RemoveContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\"" May 8 00:46:16.045674 env[1315]: time="2025-05-08T00:46:16.045576646Z" level=info msg="RemoveContainer for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" returns successfully" May 8 00:46:16.045781 kubelet[2176]: I0508 00:46:16.045755 2176 scope.go:117] "RemoveContainer" containerID="cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931" May 8 00:46:16.045966 env[1315]: time="2025-05-08T00:46:16.045916935Z" level=error msg="ContainerStatus for \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\": not found" May 8 00:46:16.046058 kubelet[2176]: E0508 00:46:16.046037 2176 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\": not found" containerID="cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931" May 8 00:46:16.046101 kubelet[2176]: I0508 00:46:16.046058 2176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931"} err="failed to get container status \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb5f16fc66fb1c1104cda1b96f775adab5b2c11c77f2c90e0527f7d9529a0931\": not found" May 8 00:46:16.202514 systemd[1]: var-lib-kubelet-pods-e264a0c7\x2d8b56\x2d4e83\x2d9ffc\x2df4c0def1decb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnj8mf.mount: Deactivated successfully. May 8 00:46:16.202688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862-rootfs.mount: Deactivated successfully. May 8 00:46:16.202783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-887bdd2869a2c686854ab94a28f92bbf602f055a5c91728e3d0057cf59a0c862-shm.mount: Deactivated successfully. May 8 00:46:16.202867 systemd[1]: var-lib-kubelet-pods-02ed2ff0\x2dd90e\x2d44c9\x2dbeef\x2dcc7bfd771bed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksdbf.mount: Deactivated successfully. 
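The run of RemoveContainer/ContainerStatus entries above shows why the NotFound errors are benign: once RemoveContainer returns successfully, a follow-up ContainerStatus lookup is expected to fail, and the kubelet records it as "already gone" rather than as a real fault. A sketch of that idempotent-delete pattern, using a hypothetical in-memory store in place of the container runtime:

    // Deleting twice is safe: a NotFound on the second attempt is
    // treated as success, which is what makes retries harmless.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    type store struct{ containers map[string]bool }

    func (s *store) remove(id string) error {
        if !s.containers[id] {
            return fmt.Errorf("container %q: %w", id, errNotFound)
        }
        delete(s.containers, id)
        return nil
    }

    func removeIdempotent(s *store, id string) error {
        if err := s.remove(id); err != nil {
            if errors.Is(err, errNotFound) {
                return nil // already gone: nothing left to do
            }
            return err
        }
        return nil
    }

    func main() {
        s := &store{containers: map[string]bool{"763b9275": true}}
        fmt.Println(removeIdempotent(s, "763b9275")) // <nil>: removed
        fmt.Println(removeIdempotent(s, "763b9275")) // <nil>: already gone
    }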
May 8 00:46:16.202950 systemd[1]: var-lib-kubelet-pods-02ed2ff0\x2dd90e\x2d44c9\x2dbeef\x2dcc7bfd771bed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:46:16.203038 systemd[1]: var-lib-kubelet-pods-02ed2ff0\x2dd90e\x2d44c9\x2dbeef\x2dcc7bfd771bed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:46:17.156007 sshd[3786]: pam_unix(sshd:session): session closed for user core May 8 00:46:17.157539 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:38784.service. May 8 00:46:17.158767 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:38778.service: Deactivated successfully. May 8 00:46:17.159877 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:46:17.159906 systemd-logind[1304]: Session 22 logged out. Waiting for processes to exit. May 8 00:46:17.161378 systemd-logind[1304]: Removed session 22. May 8 00:46:17.196528 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 38784 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:46:17.197961 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:46:17.201985 systemd-logind[1304]: New session 23 of user core. May 8 00:46:17.202439 systemd[1]: Started session-23.scope. May 8 00:46:17.816368 kubelet[2176]: I0508 00:46:17.816335 2176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" path="/var/lib/kubelet/pods/02ed2ff0-d90e-44c9-beef-cc7bfd771bed/volumes" May 8 00:46:17.817366 kubelet[2176]: I0508 00:46:17.817348 2176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e264a0c7-8b56-4e83-9ffc-f4c0def1decb" path="/var/lib/kubelet/pods/e264a0c7-8b56-4e83-9ffc-f4c0def1decb/volumes" May 8 00:46:17.863158 kubelet[2176]: E0508 00:46:17.863122 2176 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:46:18.507989 sshd[3953]: pam_unix(sshd:session): session closed for user core May 8 00:46:18.511402 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:38786.service. May 8 00:46:18.514171 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:38784.service: Deactivated successfully. 
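The mount unit names in the entries above (var-lib-kubelet-pods-...\x7eprojected-...) come from systemd's path escaping: "/" maps to "-" and most other special bytes become \xNN hex, so "~" appears as \x7e and a literal "-" as \x2d. An approximate sketch of that encoding; systemd-escape(1) defines the authoritative rules and handles edge cases (leading dots, empty paths) that this omits:

    // Escape a filesystem path into a systemd mount-unit name stem,
    // reproducing the \x2d / \x7e sequences visible in the log above.
    package main

    import (
        "fmt"
        "strings"
    )

    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c) // safe characters pass through
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(systemdEscapePath(
            "/var/lib/kubelet/pods/02ed2ff0-d90e-44c9-beef-cc7bfd771bed" +
                "/volumes/kubernetes.io~projected/hubble-tls"))
    }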
May 8 00:46:18.516871 kubelet[2176]: I0508 00:46:18.516825 2176 topology_manager.go:215] "Topology Admit Handler" podUID="fedbbece-bf29-46fa-adad-3c4b21419eb1" podNamespace="kube-system" podName="cilium-pqpxh" May 8 00:46:18.516958 kubelet[2176]: E0508 00:46:18.516950 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="clean-cilium-state" May 8 00:46:18.516985 kubelet[2176]: E0508 00:46:18.516962 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e264a0c7-8b56-4e83-9ffc-f4c0def1decb" containerName="cilium-operator" May 8 00:46:18.516985 kubelet[2176]: E0508 00:46:18.516969 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="mount-bpf-fs" May 8 00:46:18.516985 kubelet[2176]: E0508 00:46:18.516976 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="cilium-agent" May 8 00:46:18.516985 kubelet[2176]: E0508 00:46:18.516983 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="mount-cgroup" May 8 00:46:18.517103 kubelet[2176]: E0508 00:46:18.516989 2176 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="apply-sysctl-overwrites" May 8 00:46:18.517103 kubelet[2176]: I0508 00:46:18.517009 2176 memory_manager.go:354] "RemoveStaleState removing state" podUID="02ed2ff0-d90e-44c9-beef-cc7bfd771bed" containerName="cilium-agent" May 8 00:46:18.517103 kubelet[2176]: I0508 00:46:18.517015 2176 memory_manager.go:354] "RemoveStaleState removing state" podUID="e264a0c7-8b56-4e83-9ffc-f4c0def1decb" containerName="cilium-operator" May 8 00:46:18.518082 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:46:18.518602 systemd-logind[1304]: Session 23 logged out. Waiting for processes to exit. May 8 00:46:18.535879 systemd-logind[1304]: Removed session 23. May 8 00:46:18.560578 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 38786 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:46:18.562134 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:46:18.568639 systemd-logind[1304]: New session 24 of user core. May 8 00:46:18.568986 systemd[1]: Started session-24.scope. 
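The RemoveStaleState entries above show the cpu and memory managers discarding per-container accounting left over from the two deleted pods at the moment the new cilium-pqpxh pod is admitted. A loose sketch of that sweep, under the simplifying assumption that the managers' state is just a map keyed by pod UID and container name:

    // Drop accounting entries whose owning pod no longer exists; this
    // mirrors the "RemoveStaleState: removing container" lines above.
    package main

    import "fmt"

    type key struct{ podUID, container string }

    func removeStaleState(state map[key]string, activePods map[string]bool) {
        for k := range state {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %s\n",
                    k.container, k.podUID)
                delete(state, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {"02ed2ff0-d90e-44c9-beef-cc7bfd771bed", "cilium-agent"}:    "cpuset=0-3",
            {"e264a0c7-8b56-4e83-9ffc-f4c0def1decb", "cilium-operator"}: "cpuset=0-3",
            {"fedbbece-bf29-46fa-adad-3c4b21419eb1", "cilium-agent"}:    "cpuset=0-3",
        }
        active := map[string]bool{"fedbbece-bf29-46fa-adad-3c4b21419eb1": true}
        removeStaleState(state, active)
    }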
May 8 00:46:18.605979 kubelet[2176]: I0508 00:46:18.605918 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-bpf-maps\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.605979 kubelet[2176]: I0508 00:46:18.605972 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-etc-cni-netd\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606164 kubelet[2176]: I0508 00:46:18.605993 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-net\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606164 kubelet[2176]: I0508 00:46:18.606010 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-kernel\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606164 kubelet[2176]: I0508 00:46:18.606027 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-hostproc\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606164 kubelet[2176]: I0508 00:46:18.606043 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dcz\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-kube-api-access-p7dcz\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606164 kubelet[2176]: I0508 00:46:18.606066 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-clustermesh-secrets\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606083 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-config-path\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606098 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-lib-modules\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606114 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-run\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606129 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-cgroup\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606147 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-ipsec-secrets\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606283 kubelet[2176]: I0508 00:46:18.606167 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-hubble-tls\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606489 kubelet[2176]: I0508 00:46:18.606182 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cni-path\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.606489 kubelet[2176]: I0508 00:46:18.606197 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-xtables-lock\") pod \"cilium-pqpxh\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " pod="kube-system/cilium-pqpxh" May 8 00:46:18.695539 sshd[3966]: pam_unix(sshd:session): session closed for user core May 8 00:46:18.697937 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:38790.service. May 8 00:46:18.700251 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:38786.service: Deactivated successfully. May 8 00:46:18.701181 systemd-logind[1304]: Session 24 logged out. Waiting for processes to exit. May 8 00:46:18.701304 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:46:18.707640 systemd-logind[1304]: Removed session 24. May 8 00:46:18.719593 kubelet[2176]: E0508 00:46:18.719538 2176 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-cgroup cilium-config-path cilium-ipsec-secrets clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-p7dcz lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-pqpxh" podUID="fedbbece-bf29-46fa-adad-3c4b21419eb1" May 8 00:46:18.745140 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 38790 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:46:18.746566 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:46:18.750919 systemd-logind[1304]: New session 25 of user core. May 8 00:46:18.751107 systemd[1]: Started session-25.scope. 
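The "Error syncing pod, skipping ... context canceled" entry above records the kubelet abandoning the new pod's volume wait because the pod was deleted while the wait was still in progress: every volume is still listed as unmounted and the sync's context is cancelled. A sketch of that wait-with-cancellation shape, with hypothetical names; the real logic lives in the kubelet's volume manager:

    // Poll until volumes are mounted, but abort immediately if the
    // surrounding context (the pod's sync) is cancelled first.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    func waitForVolumes(ctx context.Context, mounted func() bool) error {
        tick := time.NewTicker(100 * time.Millisecond)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                // Pod deleted (or shutdown): give up on the sync.
                return fmt.Errorf("unmounted volumes remain: %w", ctx.Err())
            case <-tick.C:
                if mounted() {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        // Simulate the pod being deleted before its volumes ever mount.
        time.AfterFunc(300*time.Millisecond, cancel)
        err := waitForVolumes(ctx, func() bool { return false })
        fmt.Println("sync result:", err) // ...: context canceled
    }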
May 8 00:46:19.109298 kubelet[2176]: I0508 00:46:19.109257 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-xtables-lock\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109298 kubelet[2176]: I0508 00:46:19.109305 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-kernel\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109332 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-config-path\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109349 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-lib-modules\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109370 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-ipsec-secrets\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109391 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-bpf-maps\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109409 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cni-path\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109737 kubelet[2176]: I0508 00:46:19.109427 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7dcz\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-kube-api-access-p7dcz\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109443 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-run\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109471 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-hubble-tls\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109486 
2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-hostproc\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109517 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-cgroup\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109542 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-clustermesh-secrets\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.109891 kubelet[2176]: I0508 00:46:19.109556 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-etc-cni-netd\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.110039 kubelet[2176]: I0508 00:46:19.109570 2176 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-net\") pod \"fedbbece-bf29-46fa-adad-3c4b21419eb1\" (UID: \"fedbbece-bf29-46fa-adad-3c4b21419eb1\") " May 8 00:46:19.110039 kubelet[2176]: I0508 00:46:19.109640 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110039 kubelet[2176]: I0508 00:46:19.109696 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110039 kubelet[2176]: I0508 00:46:19.109712 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110784 kubelet[2176]: I0508 00:46:19.110749 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110846 kubelet[2176]: I0508 00:46:19.110761 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110846 kubelet[2176]: I0508 00:46:19.110806 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cni-path" (OuterVolumeSpecName: "cni-path") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.110934 kubelet[2176]: I0508 00:46:19.110913 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.111248 kubelet[2176]: I0508 00:46:19.111225 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.111435 kubelet[2176]: I0508 00:46:19.111406 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:46:19.111566 kubelet[2176]: I0508 00:46:19.111550 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.111668 kubelet[2176]: I0508 00:46:19.111643 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-hostproc" (OuterVolumeSpecName: "hostproc") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:46:19.114090 kubelet[2176]: I0508 00:46:19.114057 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:46:19.114172 kubelet[2176]: I0508 00:46:19.114149 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:46:19.115143 systemd[1]: var-lib-kubelet-pods-fedbbece\x2dbf29\x2d46fa\x2dadad\x2d3c4b21419eb1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7dcz.mount: Deactivated successfully. May 8 00:46:19.115259 kubelet[2176]: I0508 00:46:19.115157 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-kube-api-access-p7dcz" (OuterVolumeSpecName: "kube-api-access-p7dcz") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "kube-api-access-p7dcz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:46:19.115298 systemd[1]: var-lib-kubelet-pods-fedbbece\x2dbf29\x2d46fa\x2dadad\x2d3c4b21419eb1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 00:46:19.115398 systemd[1]: var-lib-kubelet-pods-fedbbece\x2dbf29\x2d46fa\x2dadad\x2d3c4b21419eb1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:46:19.115599 kubelet[2176]: I0508 00:46:19.115577 2176 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fedbbece-bf29-46fa-adad-3c4b21419eb1" (UID: "fedbbece-bf29-46fa-adad-3c4b21419eb1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:46:19.117988 systemd[1]: var-lib-kubelet-pods-fedbbece\x2dbf29\x2d46fa\x2dadad\x2d3c4b21419eb1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 8 00:46:19.209976 kubelet[2176]: I0508 00:46:19.209936 2176 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210146 kubelet[2176]: I0508 00:46:19.210134 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210209 kubelet[2176]: I0508 00:46:19.210200 2176 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210298 kubelet[2176]: I0508 00:46:19.210288 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210362 kubelet[2176]: I0508 00:46:19.210345 2176 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cni-path\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210421 kubelet[2176]: I0508 00:46:19.210412 2176 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210517 kubelet[2176]: I0508 00:46:19.210507 2176 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-hostproc\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210587 kubelet[2176]: I0508 00:46:19.210577 2176 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p7dcz\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-kube-api-access-p7dcz\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210682 kubelet[2176]: I0508 00:46:19.210671 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-run\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210764 kubelet[2176]: I0508 00:46:19.210754 2176 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fedbbece-bf29-46fa-adad-3c4b21419eb1-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210831 kubelet[2176]: I0508 00:46:19.210821 2176 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fedbbece-bf29-46fa-adad-3c4b21419eb1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210893 kubelet[2176]: I0508 00:46:19.210884 2176 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.210952 kubelet[2176]: I0508 00:46:19.210937 2176 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.211007 kubelet[2176]: I0508 00:46:19.210998 2176 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.211068 kubelet[2176]: I0508 00:46:19.211054 2176 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fedbbece-bf29-46fa-adad-3c4b21419eb1-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:46:19.293551 kubelet[2176]: I0508 00:46:19.293505 2176 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:46:19Z","lastTransitionTime":"2025-05-08T00:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:46:19.814414 kubelet[2176]: E0508 00:46:19.814370 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:20.067636 kubelet[2176]: I0508 00:46:20.067508 2176 topology_manager.go:215] "Topology Admit Handler" podUID="16fde645-2252-458c-a3f6-2d682a79c732" podNamespace="kube-system" podName="cilium-zbrsl"
May 8 00:46:20.217444 kubelet[2176]: I0508 00:46:20.217385 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-xtables-lock\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217479 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-cilium-run\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217502 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-hostproc\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217546 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-cni-path\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217564 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-bpf-maps\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217580 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-lib-modules\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217855 kubelet[2176]: I0508 00:46:20.217595 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-etc-cni-netd\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217992 kubelet[2176]: I0508 00:46:20.217639 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16fde645-2252-458c-a3f6-2d682a79c732-cilium-config-path\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217992 kubelet[2176]: I0508 00:46:20.217687 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-host-proc-sys-net\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217992 kubelet[2176]: I0508 00:46:20.217703 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16fde645-2252-458c-a3f6-2d682a79c732-hubble-tls\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217992 kubelet[2176]: I0508 00:46:20.217720 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-cilium-cgroup\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.217992 kubelet[2176]: I0508 00:46:20.217762 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16fde645-2252-458c-a3f6-2d682a79c732-clustermesh-secrets\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.218098 kubelet[2176]: I0508 00:46:20.217780 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffm5\" (UniqueName: \"kubernetes.io/projected/16fde645-2252-458c-a3f6-2d682a79c732-kube-api-access-6ffm5\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.218098 kubelet[2176]: I0508 00:46:20.217798 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16fde645-2252-458c-a3f6-2d682a79c732-cilium-ipsec-secrets\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.218098 kubelet[2176]: I0508 00:46:20.217841 2176 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16fde645-2252-458c-a3f6-2d682a79c732-host-proc-sys-kernel\") pod \"cilium-zbrsl\" (UID: \"16fde645-2252-458c-a3f6-2d682a79c732\") " pod="kube-system/cilium-zbrsl"
May 8 00:46:20.385950 kubelet[2176]: E0508 00:46:20.385626 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:20.387661 env[1315]: time="2025-05-08T00:46:20.386232909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbrsl,Uid:16fde645-2252-458c-a3f6-2d682a79c732,Namespace:kube-system,Attempt:0,}"
May 8 00:46:20.414078 env[1315]: time="2025-05-08T00:46:20.414009625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:46:20.414194 env[1315]: time="2025-05-08T00:46:20.414084547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:46:20.414194 env[1315]: time="2025-05-08T00:46:20.414117308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:46:20.414333 env[1315]: time="2025-05-08T00:46:20.414278632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb pid=4016 runtime=io.containerd.runc.v2
May 8 00:46:20.461814 env[1315]: time="2025-05-08T00:46:20.461775467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbrsl,Uid:16fde645-2252-458c-a3f6-2d682a79c732,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\""
May 8 00:46:20.462671 kubelet[2176]: E0508 00:46:20.462511 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:20.468989 env[1315]: time="2025-05-08T00:46:20.468943362Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:46:20.478107 env[1315]: time="2025-05-08T00:46:20.478063624Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ab1335ab7ad99d7ffc7bcf9de13529761d13f152df062ab808d36df6111a8e46\""
May 8 00:46:20.478981 env[1315]: time="2025-05-08T00:46:20.478956645Z" level=info msg="StartContainer for \"ab1335ab7ad99d7ffc7bcf9de13529761d13f152df062ab808d36df6111a8e46\""
May 8 00:46:20.522495 env[1315]: time="2025-05-08T00:46:20.522436623Z" level=info msg="StartContainer for \"ab1335ab7ad99d7ffc7bcf9de13529761d13f152df062ab808d36df6111a8e46\" returns successfully"
May 8 00:46:20.557093 env[1315]: time="2025-05-08T00:46:20.557038545Z" level=info msg="shim disconnected" id=ab1335ab7ad99d7ffc7bcf9de13529761d13f152df062ab808d36df6111a8e46
May 8 00:46:20.557093 env[1315]: time="2025-05-08T00:46:20.557085106Z" level=warning msg="cleaning up after shim disconnected" id=ab1335ab7ad99d7ffc7bcf9de13529761d13f152df062ab808d36df6111a8e46 namespace=k8s.io
May 8 00:46:20.557093 env[1315]: time="2025-05-08T00:46:20.557094147Z" level=info msg="cleaning up dead shim"
May 8 00:46:20.563869 env[1315]: time="2025-05-08T00:46:20.563829710Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\n"
May 8 00:46:20.815163 kubelet[2176]: E0508 00:46:20.814453 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:21.024223 kubelet[2176]: E0508 00:46:21.024192 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:21.026049 env[1315]: time="2025-05-08T00:46:21.025986669Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:46:21.038677 env[1315]: time="2025-05-08T00:46:21.037205619Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56899cc465869daa00b361585f86e634e745c5808815fb537e79c8bf286d9d18\""
May 8 00:46:21.038677 env[1315]: time="2025-05-08T00:46:21.037769912Z" level=info msg="StartContainer for \"56899cc465869daa00b361585f86e634e745c5808815fb537e79c8bf286d9d18\""
May 8 00:46:21.091105 env[1315]: time="2025-05-08T00:46:21.090811229Z" level=info msg="StartContainer for \"56899cc465869daa00b361585f86e634e745c5808815fb537e79c8bf286d9d18\" returns successfully"
May 8 00:46:21.108297 env[1315]: time="2025-05-08T00:46:21.108254769Z" level=info msg="shim disconnected" id=56899cc465869daa00b361585f86e634e745c5808815fb537e79c8bf286d9d18
May 8 00:46:21.108512 env[1315]: time="2025-05-08T00:46:21.108482334Z" level=warning msg="cleaning up after shim disconnected" id=56899cc465869daa00b361585f86e634e745c5808815fb537e79c8bf286d9d18 namespace=k8s.io
May 8 00:46:21.108581 env[1315]: time="2025-05-08T00:46:21.108566496Z" level=info msg="cleaning up dead shim"
May 8 00:46:21.114993 env[1315]: time="2025-05-08T00:46:21.114954450Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4160 runtime=io.containerd.runc.v2\n"
May 8 00:46:21.815873 kubelet[2176]: I0508 00:46:21.815833 2176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fedbbece-bf29-46fa-adad-3c4b21419eb1" path="/var/lib/kubelet/pods/fedbbece-bf29-46fa-adad-3c4b21419eb1/volumes"
May 8 00:46:22.028111 kubelet[2176]: E0508 00:46:22.027949 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:22.031032 env[1315]: time="2025-05-08T00:46:22.030940567Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:46:22.041984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261772184.mount: Deactivated successfully.
May 8 00:46:22.044286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298362218.mount: Deactivated successfully.
May 8 00:46:22.050999 env[1315]: time="2025-05-08T00:46:22.050960003Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e\""
May 8 00:46:22.051891 env[1315]: time="2025-05-08T00:46:22.051853185Z" level=info msg="StartContainer for \"97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e\""
May 8 00:46:22.113850 env[1315]: time="2025-05-08T00:46:22.113810820Z" level=info msg="StartContainer for \"97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e\" returns successfully"
May 8 00:46:22.131224 env[1315]: time="2025-05-08T00:46:22.131180873Z" level=info msg="shim disconnected" id=97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e
May 8 00:46:22.131224 env[1315]: time="2025-05-08T00:46:22.131223034Z" level=warning msg="cleaning up after shim disconnected" id=97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e namespace=k8s.io
May 8 00:46:22.131462 env[1315]: time="2025-05-08T00:46:22.131232155Z" level=info msg="cleaning up dead shim"
May 8 00:46:22.137967 env[1315]: time="2025-05-08T00:46:22.137930234Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4219 runtime=io.containerd.runc.v2\n"
May 8 00:46:22.323459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a7915dafdee2a00f0f26cf7d0d105afaf7ed225595ec64a30da93586bea92e-rootfs.mount: Deactivated successfully.
May 8 00:46:22.863972 kubelet[2176]: E0508 00:46:22.863935 2176 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:46:23.031186 kubelet[2176]: E0508 00:46:23.031123 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:23.033977 env[1315]: time="2025-05-08T00:46:23.033934280Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:46:23.047817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563719510.mount: Deactivated successfully.
May 8 00:46:23.049615 env[1315]: time="2025-05-08T00:46:23.049567128Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d\""
May 8 00:46:23.050354 env[1315]: time="2025-05-08T00:46:23.050321106Z" level=info msg="StartContainer for \"25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d\""
May 8 00:46:23.103326 env[1315]: time="2025-05-08T00:46:23.103284314Z" level=info msg="StartContainer for \"25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d\" returns successfully"
May 8 00:46:23.123665 env[1315]: time="2025-05-08T00:46:23.123341466Z" level=info msg="shim disconnected" id=25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d
May 8 00:46:23.123665 env[1315]: time="2025-05-08T00:46:23.123391708Z" level=warning msg="cleaning up after shim disconnected" id=25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d namespace=k8s.io
May 8 00:46:23.123665 env[1315]: time="2025-05-08T00:46:23.123402868Z" level=info msg="cleaning up dead shim"
May 8 00:46:23.130487 env[1315]: time="2025-05-08T00:46:23.130448474Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:46:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4275 runtime=io.containerd.runc.v2\n"
May 8 00:46:23.323523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25fd8000301b61dc7dcce6a2d81b1ca2d6b52d88c4ef9435c9c30b5227c1c31d-rootfs.mount: Deactivated successfully.
May 8 00:46:24.035249 kubelet[2176]: E0508 00:46:24.035209 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:24.038805 env[1315]: time="2025-05-08T00:46:24.038743706Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:46:24.048904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151236963.mount: Deactivated successfully.
May 8 00:46:24.053747 env[1315]: time="2025-05-08T00:46:24.053706295Z" level=info msg="CreateContainer within sandbox \"f5cb3ba538543c6a7f67a433705d8851153a4fb41c36051e9cacbc16c7eb99cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e882ee26e20b7d81a3d28179dbba37b4bc741a30358b09c3f1a671e9b4eee437\""
May 8 00:46:24.055285 env[1315]: time="2025-05-08T00:46:24.054833521Z" level=info msg="StartContainer for \"e882ee26e20b7d81a3d28179dbba37b4bc741a30358b09c3f1a671e9b4eee437\""
May 8 00:46:24.109145 env[1315]: time="2025-05-08T00:46:24.109093546Z" level=info msg="StartContainer for \"e882ee26e20b7d81a3d28179dbba37b4bc741a30358b09c3f1a671e9b4eee437\" returns successfully"
May 8 00:46:24.359680 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 8 00:46:25.039834 kubelet[2176]: E0508 00:46:25.039803 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:25.053685 kubelet[2176]: I0508 00:46:25.053621 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zbrsl" podStartSLOduration=5.053608001 podStartE2EDuration="5.053608001s" podCreationTimestamp="2025-05-08 00:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:25.052999867 +0000 UTC m=+87.326494291" watchObservedRunningTime="2025-05-08 00:46:25.053608001 +0000 UTC m=+87.327102505"
May 8 00:46:26.387604 kubelet[2176]: E0508 00:46:26.387566 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:27.144475 systemd-networkd[1097]: lxc_health: Link UP
May 8 00:46:27.156720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:46:27.158634 systemd-networkd[1097]: lxc_health: Gained carrier
May 8 00:46:28.387903 kubelet[2176]: E0508 00:46:28.387860 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:29.000783 systemd-networkd[1097]: lxc_health: Gained IPv6LL
May 8 00:46:29.047354 kubelet[2176]: E0508 00:46:29.047318 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:30.048733 kubelet[2176]: E0508 00:46:30.048693 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:31.291769 systemd[1]: run-containerd-runc-k8s.io-e882ee26e20b7d81a3d28179dbba37b4bc741a30358b09c3f1a671e9b4eee437-runc.EE0zVb.mount: Deactivated successfully.
May 8 00:46:31.339423 kubelet[2176]: E0508 00:46:31.339235 2176 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:40240->127.0.0.1:34349: read tcp 127.0.0.1:40240->127.0.0.1:34349: read: connection reset by peer
May 8 00:46:31.814697 kubelet[2176]: E0508 00:46:31.814339 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:46:33.469813 sshd[3981]: pam_unix(sshd:session): session closed for user core
May 8 00:46:33.473185 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:38790.service: Deactivated successfully.
May 8 00:46:33.474167 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:46:33.474452 systemd-logind[1304]: Session 25 logged out. Waiting for processes to exit.
May 8 00:46:33.475231 systemd-logind[1304]: Removed session 25.