May 16 00:53:24.715572 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 16 00:53:24.715591 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025 May 16 00:53:24.715599 kernel: efi: EFI v2.70 by EDK II May 16 00:53:24.715605 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 16 00:53:24.715610 kernel: random: crng init done May 16 00:53:24.715615 kernel: ACPI: Early table checksum verification disabled May 16 00:53:24.715621 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 16 00:53:24.715628 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 16 00:53:24.715634 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715639 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715644 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715650 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715655 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715660 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715668 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715674 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715680 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:53:24.715686 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 16 00:53:24.715691 kernel: NUMA: Failed to initialise from firmware May 16 00:53:24.715697 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:53:24.715703 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 16 00:53:24.715709 kernel: Zone ranges: May 16 00:53:24.715714 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:53:24.715721 kernel: DMA32 empty May 16 00:53:24.715727 kernel: Normal empty May 16 00:53:24.715732 kernel: Movable zone start for each node May 16 00:53:24.715738 kernel: Early memory node ranges May 16 00:53:24.715744 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 16 00:53:24.715749 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 16 00:53:24.715755 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 16 00:53:24.715768 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 16 00:53:24.715774 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 16 00:53:24.715779 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 16 00:53:24.715785 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 16 00:53:24.715791 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:53:24.715797 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 16 00:53:24.715803 kernel: psci: probing for conduit method from ACPI. May 16 00:53:24.715809 kernel: psci: PSCIv1.1 detected in firmware. 
May 16 00:53:24.715814 kernel: psci: Using standard PSCI v0.2 function IDs May 16 00:53:24.715820 kernel: psci: Trusted OS migration not required May 16 00:53:24.715828 kernel: psci: SMC Calling Convention v1.1 May 16 00:53:24.715834 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 16 00:53:24.715841 kernel: ACPI: SRAT not present May 16 00:53:24.715848 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 16 00:53:24.715854 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 16 00:53:24.715860 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 16 00:53:24.715866 kernel: Detected PIPT I-cache on CPU0 May 16 00:53:24.715872 kernel: CPU features: detected: GIC system register CPU interface May 16 00:53:24.715878 kernel: CPU features: detected: Hardware dirty bit management May 16 00:53:24.715884 kernel: CPU features: detected: Spectre-v4 May 16 00:53:24.715890 kernel: CPU features: detected: Spectre-BHB May 16 00:53:24.715897 kernel: CPU features: kernel page table isolation forced ON by KASLR May 16 00:53:24.715903 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 16 00:53:24.715909 kernel: CPU features: detected: ARM erratum 1418040 May 16 00:53:24.715915 kernel: CPU features: detected: SSBS not fully self-synchronizing May 16 00:53:24.715921 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 16 00:53:24.715927 kernel: Policy zone: DMA May 16 00:53:24.715935 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:53:24.715941 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:53:24.715947 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:53:24.715953 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:53:24.715959 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:53:24.715967 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved) May 16 00:53:24.715973 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 00:53:24.715979 kernel: trace event string verifier disabled May 16 00:53:24.715985 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:53:24.715992 kernel: rcu: RCU event tracing is enabled. May 16 00:53:24.715998 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 00:53:24.716004 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:53:24.716010 kernel: Tracing variant of Tasks RCU enabled. May 16 00:53:24.716016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 16 00:53:24.716022 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 00:53:24.716028 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 16 00:53:24.716035 kernel: GICv3: 256 SPIs implemented May 16 00:53:24.716041 kernel: GICv3: 0 Extended SPIs implemented May 16 00:53:24.716047 kernel: GICv3: Distributor has no Range Selector support May 16 00:53:24.716053 kernel: Root IRQ handler: gic_handle_irq May 16 00:53:24.716059 kernel: GICv3: 16 PPIs implemented May 16 00:53:24.716065 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 16 00:53:24.716071 kernel: ACPI: SRAT not present May 16 00:53:24.716077 kernel: ITS [mem 0x08080000-0x0809ffff] May 16 00:53:24.716083 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:53:24.716089 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 16 00:53:24.716095 kernel: GICv3: using LPI property table @0x00000000400d0000 May 16 00:53:24.716101 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 16 00:53:24.716109 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:53:24.716115 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 16 00:53:24.716121 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 16 00:53:24.716127 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 16 00:53:24.716133 kernel: arm-pv: using stolen time PV May 16 00:53:24.716140 kernel: Console: colour dummy device 80x25 May 16 00:53:24.716146 kernel: ACPI: Core revision 20210730 May 16 00:53:24.716153 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 16 00:53:24.716159 kernel: pid_max: default: 32768 minimum: 301 May 16 00:53:24.716165 kernel: LSM: Security Framework initializing May 16 00:53:24.716172 kernel: SELinux: Initializing. May 16 00:53:24.716178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:53:24.716185 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:53:24.716191 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 16 00:53:24.716197 kernel: rcu: Hierarchical SRCU implementation. May 16 00:53:24.716203 kernel: Platform MSI: ITS@0x8080000 domain created May 16 00:53:24.716209 kernel: PCI/MSI: ITS@0x8080000 domain created May 16 00:53:24.716216 kernel: Remapping and enabling EFI services. May 16 00:53:24.716222 kernel: smp: Bringing up secondary CPUs ... 
May 16 00:53:24.716229 kernel: Detected PIPT I-cache on CPU1 May 16 00:53:24.716235 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 16 00:53:24.716242 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 16 00:53:24.716248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:53:24.716254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 16 00:53:24.716260 kernel: Detected PIPT I-cache on CPU2 May 16 00:53:24.716267 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 16 00:53:24.716273 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 16 00:53:24.716279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:53:24.716285 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 16 00:53:24.716292 kernel: Detected PIPT I-cache on CPU3 May 16 00:53:24.716299 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 16 00:53:24.716305 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 16 00:53:24.716312 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:53:24.716321 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 16 00:53:24.716329 kernel: smp: Brought up 1 node, 4 CPUs May 16 00:53:24.716336 kernel: SMP: Total of 4 processors activated. May 16 00:53:24.716342 kernel: CPU features: detected: 32-bit EL0 Support May 16 00:53:24.716349 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 16 00:53:24.716355 kernel: CPU features: detected: Common not Private translations May 16 00:53:24.716362 kernel: CPU features: detected: CRC32 instructions May 16 00:53:24.716368 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 16 00:53:24.716375 kernel: CPU features: detected: LSE atomic instructions May 16 00:53:24.716382 kernel: CPU features: detected: Privileged Access Never May 16 00:53:24.716389 kernel: CPU features: detected: RAS Extension Support May 16 00:53:24.716395 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 16 00:53:24.716402 kernel: CPU: All CPU(s) started at EL1 May 16 00:53:24.716409 kernel: alternatives: patching kernel code May 16 00:53:24.716416 kernel: devtmpfs: initialized May 16 00:53:24.716422 kernel: KASLR enabled May 16 00:53:24.716429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:53:24.716435 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 00:53:24.716450 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:53:24.716471 kernel: SMBIOS 3.0.0 present. 
May 16 00:53:24.716478 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 16 00:53:24.716484 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:53:24.716493 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 16 00:53:24.716500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 16 00:53:24.716506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 16 00:53:24.716513 kernel: audit: initializing netlink subsys (disabled) May 16 00:53:24.716520 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 16 00:53:24.716526 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:53:24.716532 kernel: cpuidle: using governor menu May 16 00:53:24.716539 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 16 00:53:24.716546 kernel: ASID allocator initialised with 32768 entries May 16 00:53:24.716554 kernel: ACPI: bus type PCI registered May 16 00:53:24.716560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:53:24.716567 kernel: Serial: AMBA PL011 UART driver May 16 00:53:24.716574 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:53:24.716580 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 16 00:53:24.716587 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:53:24.716593 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 16 00:53:24.716600 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:53:24.716606 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 16 00:53:24.716614 kernel: ACPI: Added _OSI(Module Device) May 16 00:53:24.716621 kernel: ACPI: Added _OSI(Processor Device) May 16 00:53:24.716627 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:53:24.716634 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:53:24.716640 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 16 00:53:24.716647 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 16 00:53:24.716653 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 16 00:53:24.716660 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 00:53:24.716666 kernel: ACPI: Interpreter enabled May 16 00:53:24.716674 kernel: ACPI: Using GIC for interrupt routing May 16 00:53:24.716681 kernel: ACPI: MCFG table detected, 1 entries May 16 00:53:24.716687 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 16 00:53:24.716694 kernel: printk: console [ttyAMA0] enabled May 16 00:53:24.716700 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:53:24.716834 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:53:24.716899 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 16 00:53:24.716967 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 16 00:53:24.717025 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 16 00:53:24.717082 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 16 00:53:24.717091 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 16 00:53:24.717097 kernel: PCI host bridge to bus 0000:00 May 16 00:53:24.717168 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 16 00:53:24.717221 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 16 00:53:24.717273 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 16 00:53:24.717325 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:53:24.717396 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 16 00:53:24.717483 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 16 00:53:24.717545 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 16 00:53:24.717605 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 16 00:53:24.717664 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:53:24.717724 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:53:24.717792 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 16 00:53:24.717853 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 16 00:53:24.717906 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 16 00:53:24.717956 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 16 00:53:24.718008 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 16 00:53:24.718017 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 16 00:53:24.718023 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 16 00:53:24.718032 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 16 00:53:24.718039 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 16 00:53:24.718045 kernel: iommu: Default domain type: Translated May 16 00:53:24.718052 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 16 00:53:24.718059 kernel: vgaarb: loaded May 16 00:53:24.718065 kernel: pps_core: LinuxPPS API ver. 1 registered May 16 00:53:24.718072 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 16 00:53:24.718078 kernel: PTP clock support registered May 16 00:53:24.718085 kernel: Registered efivars operations May 16 00:53:24.718093 kernel: clocksource: Switched to clocksource arch_sys_counter May 16 00:53:24.718099 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:53:24.718106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:53:24.718112 kernel: pnp: PnP ACPI init May 16 00:53:24.718175 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 16 00:53:24.718184 kernel: pnp: PnP ACPI: found 1 devices May 16 00:53:24.718191 kernel: NET: Registered PF_INET protocol family May 16 00:53:24.718197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 00:53:24.718206 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 00:53:24.718213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:53:24.718219 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:53:24.718226 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 16 00:53:24.718232 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 00:53:24.718239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:53:24.718246 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:53:24.718252 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:53:24.718259 kernel: PCI: CLS 0 bytes, default 64 May 16 00:53:24.718266 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 16 00:53:24.718273 kernel: kvm [1]: HYP mode not available May 16 00:53:24.718279 kernel: Initialise system trusted keyrings May 16 00:53:24.718286 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 00:53:24.718292 kernel: Key type asymmetric registered May 16 00:53:24.718299 kernel: Asymmetric key parser 'x509' registered May 16 00:53:24.718305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 16 00:53:24.718312 kernel: io scheduler mq-deadline registered May 16 00:53:24.718318 kernel: io scheduler kyber registered May 16 00:53:24.718326 kernel: io scheduler bfq registered May 16 00:53:24.718332 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 16 00:53:24.718339 kernel: ACPI: button: Power Button [PWRB] May 16 00:53:24.718345 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 16 00:53:24.718402 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 16 00:53:24.718411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:53:24.718418 kernel: thunder_xcv, ver 1.0 May 16 00:53:24.718424 kernel: thunder_bgx, ver 1.0 May 16 00:53:24.718430 kernel: nicpf, ver 1.0 May 16 00:53:24.718438 kernel: nicvf, ver 1.0 May 16 00:53:24.718554 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 16 00:53:24.718616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:53:24 UTC (1747356804) May 16 00:53:24.718625 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:53:24.718632 kernel: NET: Registered PF_INET6 protocol family May 16 00:53:24.718638 kernel: Segment Routing with IPv6 May 16 00:53:24.718645 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:53:24.718651 kernel: NET: Registered PF_PACKET protocol family May 16 00:53:24.718660 kernel: Key type 
dns_resolver registered May 16 00:53:24.718667 kernel: registered taskstats version 1 May 16 00:53:24.718674 kernel: Loading compiled-in X.509 certificates May 16 00:53:24.718680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2' May 16 00:53:24.718687 kernel: Key type .fscrypt registered May 16 00:53:24.718694 kernel: Key type fscrypt-provisioning registered May 16 00:53:24.718700 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:53:24.718707 kernel: ima: Allocated hash algorithm: sha1 May 16 00:53:24.718713 kernel: ima: No architecture policies found May 16 00:53:24.718721 kernel: clk: Disabling unused clocks May 16 00:53:24.718727 kernel: Freeing unused kernel memory: 36480K May 16 00:53:24.718734 kernel: Run /init as init process May 16 00:53:24.718740 kernel: with arguments: May 16 00:53:24.718746 kernel: /init May 16 00:53:24.718753 kernel: with environment: May 16 00:53:24.718764 kernel: HOME=/ May 16 00:53:24.718772 kernel: TERM=linux May 16 00:53:24.718779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:53:24.718789 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:53:24.718798 systemd[1]: Detected virtualization kvm. May 16 00:53:24.718805 systemd[1]: Detected architecture arm64. May 16 00:53:24.718811 systemd[1]: Running in initrd. May 16 00:53:24.718818 systemd[1]: No hostname configured, using default hostname. May 16 00:53:24.718825 systemd[1]: Hostname set to . May 16 00:53:24.718832 systemd[1]: Initializing machine ID from VM UUID. May 16 00:53:24.718840 systemd[1]: Queued start job for default target initrd.target. May 16 00:53:24.718847 systemd[1]: Started systemd-ask-password-console.path. May 16 00:53:24.718854 systemd[1]: Reached target cryptsetup.target. May 16 00:53:24.718861 systemd[1]: Reached target paths.target. May 16 00:53:24.718868 systemd[1]: Reached target slices.target. May 16 00:53:24.718875 systemd[1]: Reached target swap.target. May 16 00:53:24.718881 systemd[1]: Reached target timers.target. May 16 00:53:24.718889 systemd[1]: Listening on iscsid.socket. May 16 00:53:24.718897 systemd[1]: Listening on iscsiuio.socket. May 16 00:53:24.718904 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:53:24.718911 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:53:24.718917 systemd[1]: Listening on systemd-journald.socket. May 16 00:53:24.718924 systemd[1]: Listening on systemd-networkd.socket. May 16 00:53:24.718931 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:53:24.718938 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:53:24.718945 systemd[1]: Reached target sockets.target. May 16 00:53:24.718953 systemd[1]: Starting kmod-static-nodes.service... May 16 00:53:24.718960 systemd[1]: Finished network-cleanup.service. May 16 00:53:24.718966 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:53:24.718973 systemd[1]: Starting systemd-journald.service... May 16 00:53:24.718980 systemd[1]: Starting systemd-modules-load.service... May 16 00:53:24.718987 systemd[1]: Starting systemd-resolved.service... May 16 00:53:24.718994 systemd[1]: Starting systemd-vconsole-setup.service... 
May 16 00:53:24.719001 systemd[1]: Finished kmod-static-nodes.service. May 16 00:53:24.719008 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:53:24.719016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:53:24.719023 systemd[1]: Finished systemd-vconsole-setup.service. May 16 00:53:24.719030 systemd[1]: Starting dracut-cmdline-ask.service... May 16 00:53:24.719037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:53:24.719046 systemd-journald[290]: Journal started May 16 00:53:24.719087 systemd-journald[290]: Runtime Journal (/run/log/journal/a8122615a4084c17b713fa9e3d7fb6cd) is 6.0M, max 48.7M, 42.6M free. May 16 00:53:24.708871 systemd-modules-load[291]: Inserted module 'overlay' May 16 00:53:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.723500 kernel: audit: type=1130 audit(1747356804.720:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.723524 systemd[1]: Started systemd-journald.service. May 16 00:53:24.724714 kernel: audit: type=1130 audit(1747356804.724:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.737351 systemd[1]: Finished dracut-cmdline-ask.service. May 16 00:53:24.737531 systemd-resolved[292]: Positive Trust Anchors: May 16 00:53:24.737538 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:53:24.737565 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:53:24.748413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:53:24.748431 kernel: audit: type=1130 audit(1747356804.737:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.738792 systemd[1]: Starting dracut-cmdline.service... May 16 00:53:24.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.742182 systemd-resolved[292]: Defaulting to hostname 'linux'. 
May 16 00:53:24.753473 kernel: audit: type=1130 audit(1747356804.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.753490 kernel: Bridge firewalling registered May 16 00:53:24.748359 systemd[1]: Started systemd-resolved.service. May 16 00:53:24.749148 systemd[1]: Reached target nss-lookup.target. May 16 00:53:24.752303 systemd-modules-load[291]: Inserted module 'br_netfilter' May 16 00:53:24.759330 dracut-cmdline[308]: dracut-dracut-053 May 16 00:53:24.761533 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:53:24.766732 kernel: SCSI subsystem initialized May 16 00:53:24.773736 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:53:24.773780 kernel: device-mapper: uevent: version 1.0.3 May 16 00:53:24.775153 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 16 00:53:24.777036 systemd-modules-load[291]: Inserted module 'dm_multipath' May 16 00:53:24.777872 systemd[1]: Finished systemd-modules-load.service. May 16 00:53:24.782052 kernel: audit: type=1130 audit(1747356804.778:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.779233 systemd[1]: Starting systemd-sysctl.service... May 16 00:53:24.786850 systemd[1]: Finished systemd-sysctl.service. May 16 00:53:24.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.790463 kernel: audit: type=1130 audit(1747356804.787:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.823463 kernel: Loading iSCSI transport class v2.0-870. May 16 00:53:24.835461 kernel: iscsi: registered transport (tcp) May 16 00:53:24.852465 kernel: iscsi: registered transport (qla4xxx) May 16 00:53:24.852486 kernel: QLogic iSCSI HBA Driver May 16 00:53:24.886487 systemd[1]: Finished dracut-cmdline.service. May 16 00:53:24.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:24.887937 systemd[1]: Starting dracut-pre-udev.service... May 16 00:53:24.890985 kernel: audit: type=1130 audit(1747356804.886:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:53:24.933524 kernel: raid6: neonx8 gen() 13729 MB/s May 16 00:53:24.950481 kernel: raid6: neonx8 xor() 10758 MB/s May 16 00:53:24.967482 kernel: raid6: neonx4 gen() 13510 MB/s May 16 00:53:24.984472 kernel: raid6: neonx4 xor() 11184 MB/s May 16 00:53:25.001469 kernel: raid6: neonx2 gen() 12944 MB/s May 16 00:53:25.018468 kernel: raid6: neonx2 xor() 10394 MB/s May 16 00:53:25.035468 kernel: raid6: neonx1 gen() 10582 MB/s May 16 00:53:25.052466 kernel: raid6: neonx1 xor() 8775 MB/s May 16 00:53:25.069465 kernel: raid6: int64x8 gen() 6266 MB/s May 16 00:53:25.086468 kernel: raid6: int64x8 xor() 3528 MB/s May 16 00:53:25.103475 kernel: raid6: int64x4 gen() 7208 MB/s May 16 00:53:25.120475 kernel: raid6: int64x4 xor() 3847 MB/s May 16 00:53:25.137480 kernel: raid6: int64x2 gen() 6142 MB/s May 16 00:53:25.154477 kernel: raid6: int64x2 xor() 3320 MB/s May 16 00:53:25.171467 kernel: raid6: int64x1 gen() 5040 MB/s May 16 00:53:25.188570 kernel: raid6: int64x1 xor() 2644 MB/s May 16 00:53:25.188583 kernel: raid6: using algorithm neonx8 gen() 13729 MB/s May 16 00:53:25.188591 kernel: raid6: .... xor() 10758 MB/s, rmw enabled May 16 00:53:25.189659 kernel: raid6: using neon recovery algorithm May 16 00:53:25.200858 kernel: xor: measuring software checksum speed May 16 00:53:25.200887 kernel: 8regs : 17173 MB/sec May 16 00:53:25.200904 kernel: 32regs : 20712 MB/sec May 16 00:53:25.202096 kernel: arm64_neon : 27570 MB/sec May 16 00:53:25.202106 kernel: xor: using function: arm64_neon (27570 MB/sec) May 16 00:53:25.254465 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 16 00:53:25.264786 systemd[1]: Finished dracut-pre-udev.service. May 16 00:53:25.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:25.267000 audit: BPF prog-id=7 op=LOAD May 16 00:53:25.269206 kernel: audit: type=1130 audit(1747356805.264:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:25.269228 kernel: audit: type=1334 audit(1747356805.267:10): prog-id=7 op=LOAD May 16 00:53:25.268000 audit: BPF prog-id=8 op=LOAD May 16 00:53:25.269613 systemd[1]: Starting systemd-udevd.service... May 16 00:53:25.282967 systemd-udevd[493]: Using default interface naming scheme 'v252'. May 16 00:53:25.286267 systemd[1]: Started systemd-udevd.service. May 16 00:53:25.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:25.287614 systemd[1]: Starting dracut-pre-trigger.service... May 16 00:53:25.299315 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation May 16 00:53:25.324780 systemd[1]: Finished dracut-pre-trigger.service. May 16 00:53:25.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:25.326286 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:53:25.359170 systemd[1]: Finished systemd-udev-trigger.service. 
May 16 00:53:25.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:25.394966 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:53:25.399934 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:53:25.399955 kernel: GPT:9289727 != 19775487 May 16 00:53:25.399965 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:53:25.399973 kernel: GPT:9289727 != 19775487 May 16 00:53:25.399983 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:53:25.399991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:53:25.411463 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (545) May 16 00:53:25.412881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 16 00:53:25.413661 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 16 00:53:25.421254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 16 00:53:25.424463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:53:25.428223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 16 00:53:25.429681 systemd[1]: Starting disk-uuid.service... May 16 00:53:25.435472 disk-uuid[565]: Primary Header is updated. May 16 00:53:25.435472 disk-uuid[565]: Secondary Entries is updated. May 16 00:53:25.435472 disk-uuid[565]: Secondary Header is updated. May 16 00:53:25.443783 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:53:25.450467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:53:26.451294 disk-uuid[566]: The operation has completed successfully. May 16 00:53:26.452198 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:53:26.474084 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:53:26.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.474183 systemd[1]: Finished disk-uuid.service. May 16 00:53:26.475602 systemd[1]: Starting verity-setup.service... May 16 00:53:26.493482 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:53:26.511091 systemd[1]: Found device dev-mapper-usr.device. May 16 00:53:26.513025 systemd[1]: Mounting sysusr-usr.mount... May 16 00:53:26.514734 systemd[1]: Finished verity-setup.service. May 16 00:53:26.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.561464 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 16 00:53:26.561745 systemd[1]: Mounted sysusr-usr.mount. May 16 00:53:26.562397 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 16 00:53:26.563065 systemd[1]: Starting ignition-setup.service... May 16 00:53:26.564811 systemd[1]: Starting parse-ip-for-networkd.service... 
May 16 00:53:26.572961 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:53:26.572997 kernel: BTRFS info (device vda6): using free space tree May 16 00:53:26.573006 kernel: BTRFS info (device vda6): has skinny extents May 16 00:53:26.580854 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:53:26.586058 systemd[1]: Finished ignition-setup.service. May 16 00:53:26.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.587396 systemd[1]: Starting ignition-fetch-offline.service... May 16 00:53:26.649630 systemd[1]: Finished parse-ip-for-networkd.service. May 16 00:53:26.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.650000 audit: BPF prog-id=9 op=LOAD May 16 00:53:26.651608 systemd[1]: Starting systemd-networkd.service... May 16 00:53:26.659422 ignition[651]: Ignition 2.14.0 May 16 00:53:26.659432 ignition[651]: Stage: fetch-offline May 16 00:53:26.659480 ignition[651]: no configs at "/usr/lib/ignition/base.d" May 16 00:53:26.659490 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:26.659609 ignition[651]: parsed url from cmdline: "" May 16 00:53:26.659612 ignition[651]: no config URL provided May 16 00:53:26.659617 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:53:26.659623 ignition[651]: no config at "/usr/lib/ignition/user.ign" May 16 00:53:26.659640 ignition[651]: op(1): [started] loading QEMU firmware config module May 16 00:53:26.659647 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:53:26.665107 ignition[651]: op(1): [finished] loading QEMU firmware config module May 16 00:53:26.665132 ignition[651]: QEMU firmware config was not found. Ignoring... May 16 00:53:26.677101 systemd-networkd[741]: lo: Link UP May 16 00:53:26.677112 systemd-networkd[741]: lo: Gained carrier May 16 00:53:26.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.677712 systemd-networkd[741]: Enumeration completed May 16 00:53:26.677821 systemd[1]: Started systemd-networkd.service. May 16 00:53:26.678083 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:53:26.678515 systemd[1]: Reached target network.target. May 16 00:53:26.679548 systemd-networkd[741]: eth0: Link UP May 16 00:53:26.679552 systemd-networkd[741]: eth0: Gained carrier May 16 00:53:26.680239 systemd[1]: Starting iscsiuio.service... May 16 00:53:26.689124 systemd[1]: Started iscsiuio.service. May 16 00:53:26.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.690494 systemd[1]: Starting iscsid.service... May 16 00:53:26.693934 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 16 00:53:26.693934 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 16 00:53:26.693934 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 16 00:53:26.693934 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. May 16 00:53:26.693934 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 16 00:53:26.693934 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 16 00:53:26.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.699640 systemd[1]: Started iscsid.service. May 16 00:53:26.701078 systemd[1]: Starting dracut-initqueue.service... May 16 00:53:26.705340 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:53:26.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.711582 systemd[1]: Finished dracut-initqueue.service. May 16 00:53:26.712532 systemd[1]: Reached target remote-fs-pre.target. May 16 00:53:26.713383 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:53:26.714241 systemd[1]: Reached target remote-fs.target. May 16 00:53:26.715667 systemd[1]: Starting dracut-pre-mount.service... May 16 00:53:26.723231 systemd[1]: Finished dracut-pre-mount.service. May 16 00:53:26.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.729678 ignition[651]: parsing config with SHA512: 3c31a985af4b3bbd6200f845bd44890581f7b9ee2bc8d8949e48c1d2b5967be2df43121e2f729376445c260870f22e8191aa95600d527de8e25478fcacecd312 May 16 00:53:26.741028 unknown[651]: fetched base config from "system" May 16 00:53:26.741039 unknown[651]: fetched user config from "qemu" May 16 00:53:26.741478 ignition[651]: fetch-offline: fetch-offline passed May 16 00:53:26.741531 ignition[651]: Ignition finished successfully May 16 00:53:26.743216 systemd[1]: Finished ignition-fetch-offline.service. May 16 00:53:26.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.744480 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:53:26.745151 systemd[1]: Starting ignition-kargs.service... May 16 00:53:26.754126 ignition[762]: Ignition 2.14.0 May 16 00:53:26.754136 ignition[762]: Stage: kargs May 16 00:53:26.754222 ignition[762]: no configs at "/usr/lib/ignition/base.d" May 16 00:53:26.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.756915 systemd[1]: Finished ignition-kargs.service. 
May 16 00:53:26.754232 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:26.755113 ignition[762]: kargs: kargs passed May 16 00:53:26.758339 systemd[1]: Starting ignition-disks.service... May 16 00:53:26.755156 ignition[762]: Ignition finished successfully May 16 00:53:26.764784 ignition[768]: Ignition 2.14.0 May 16 00:53:26.764793 ignition[768]: Stage: disks May 16 00:53:26.764877 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 16 00:53:26.764886 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:26.766665 systemd[1]: Finished ignition-disks.service. May 16 00:53:26.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.765791 ignition[768]: disks: disks passed May 16 00:53:26.768248 systemd[1]: Reached target initrd-root-device.target. May 16 00:53:26.765834 ignition[768]: Ignition finished successfully May 16 00:53:26.769342 systemd[1]: Reached target local-fs-pre.target. May 16 00:53:26.770376 systemd[1]: Reached target local-fs.target. May 16 00:53:26.771552 systemd[1]: Reached target sysinit.target. May 16 00:53:26.772625 systemd[1]: Reached target basic.target. May 16 00:53:26.774505 systemd[1]: Starting systemd-fsck-root.service... May 16 00:53:26.784724 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 16 00:53:26.788632 systemd[1]: Finished systemd-fsck-root.service. May 16 00:53:26.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.790195 systemd[1]: Mounting sysroot.mount... May 16 00:53:26.796460 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 16 00:53:26.796953 systemd[1]: Mounted sysroot.mount. May 16 00:53:26.797599 systemd[1]: Reached target initrd-root-fs.target. May 16 00:53:26.799578 systemd[1]: Mounting sysroot-usr.mount... May 16 00:53:26.800346 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 16 00:53:26.800383 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:53:26.800407 systemd[1]: Reached target ignition-diskful.target. May 16 00:53:26.802227 systemd[1]: Mounted sysroot-usr.mount. May 16 00:53:26.803552 systemd[1]: Starting initrd-setup-root.service... May 16 00:53:26.807830 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:53:26.811531 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory May 16 00:53:26.814557 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:53:26.818629 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:53:26.843769 systemd[1]: Finished initrd-setup-root.service. May 16 00:53:26.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.845115 systemd[1]: Starting ignition-mount.service... May 16 00:53:26.846296 systemd[1]: Starting sysroot-boot.service... May 16 00:53:26.851078 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
May 16 00:53:26.860823 ignition[829]: INFO : Ignition 2.14.0 May 16 00:53:26.860823 ignition[829]: INFO : Stage: mount May 16 00:53:26.862186 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:53:26.862186 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:26.862186 ignition[829]: INFO : mount: mount passed May 16 00:53:26.862186 ignition[829]: INFO : Ignition finished successfully May 16 00:53:26.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:26.863142 systemd[1]: Finished ignition-mount.service. May 16 00:53:26.864080 systemd[1]: Finished sysroot-boot.service. May 16 00:53:27.521505 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 16 00:53:27.527474 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) May 16 00:53:27.529949 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:53:27.529998 kernel: BTRFS info (device vda6): using free space tree May 16 00:53:27.530009 kernel: BTRFS info (device vda6): has skinny extents May 16 00:53:27.532729 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 16 00:53:27.534361 systemd[1]: Starting ignition-files.service... May 16 00:53:27.548410 ignition[857]: INFO : Ignition 2.14.0 May 16 00:53:27.548410 ignition[857]: INFO : Stage: files May 16 00:53:27.549801 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:53:27.549801 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:27.549801 ignition[857]: DEBUG : files: compiled without relabeling support, skipping May 16 00:53:27.553745 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:53:27.553745 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:53:27.557202 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:53:27.558391 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:53:27.558391 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:53:27.557930 unknown[857]: wrote ssh authorized keys file for user: core May 16 00:53:27.561687 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 16 00:53:27.561687 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 16 00:53:27.798359 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:53:28.132175 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 16 00:53:28.133906 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:53:28.133906 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 16 00:53:28.470154 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:53:28.636770 systemd-networkd[741]: eth0: Gained IPv6LL May 16 00:53:28.668218 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 16 00:53:28.669948 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 16 00:53:29.034699 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:53:29.484943 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 16 00:53:29.484943 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:53:29.488317 ignition[857]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:53:29.519915 ignition[857]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:53:29.522207 ignition[857]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:53:29.522207 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:53:29.522207 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:53:29.522207 ignition[857]: INFO : files: files passed May 16 00:53:29.522207 ignition[857]: INFO : Ignition finished successfully May 16 00:53:29.531734 kernel: kauditd_printk_skb: 22 callbacks suppressed May 16 00:53:29.531760 kernel: audit: type=1130 audit(1747356809.524:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.522389 systemd[1]: Finished ignition-files.service. May 16 00:53:29.526387 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 16 00:53:29.534006 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 16 00:53:29.540323 kernel: audit: type=1130 audit(1747356809.534:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.540341 kernel: audit: type=1131 audit(1747356809.534:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:53:29.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.530550 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 16 00:53:29.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.544979 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:53:29.546690 kernel: audit: type=1130 audit(1747356809.540:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.531218 systemd[1]: Starting ignition-quench.service... May 16 00:53:29.533948 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:53:29.534027 systemd[1]: Finished ignition-quench.service. May 16 00:53:29.534927 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 16 00:53:29.541204 systemd[1]: Reached target ignition-complete.target. May 16 00:53:29.546201 systemd[1]: Starting initrd-parse-etc.service... May 16 00:53:29.558227 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:53:29.558323 systemd[1]: Finished initrd-parse-etc.service. May 16 00:53:29.564905 kernel: audit: type=1130 audit(1747356809.559:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.564922 kernel: audit: type=1131 audit(1747356809.559:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.559794 systemd[1]: Reached target initrd-fs.target. May 16 00:53:29.565517 systemd[1]: Reached target initrd.target. May 16 00:53:29.566617 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 16 00:53:29.567294 systemd[1]: Starting dracut-pre-pivot.service... May 16 00:53:29.577324 systemd[1]: Finished dracut-pre-pivot.service. May 16 00:53:29.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.578769 systemd[1]: Starting initrd-cleanup.service... May 16 00:53:29.582078 kernel: audit: type=1130 audit(1747356809.577:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.586242 systemd[1]: Stopped target nss-lookup.target. 
May 16 00:53:29.587002 systemd[1]: Stopped target remote-cryptsetup.target. May 16 00:53:29.588210 systemd[1]: Stopped target timers.target. May 16 00:53:29.589335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:53:29.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.589433 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:53:29.594573 kernel: audit: type=1131 audit(1747356809.589:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.590531 systemd[1]: Stopped target initrd.target. May 16 00:53:29.594090 systemd[1]: Stopped target basic.target. May 16 00:53:29.595177 systemd[1]: Stopped target ignition-complete.target. May 16 00:53:29.596306 systemd[1]: Stopped target ignition-diskful.target. May 16 00:53:29.597411 systemd[1]: Stopped target initrd-root-device.target. May 16 00:53:29.598658 systemd[1]: Stopped target remote-fs.target. May 16 00:53:29.599812 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:53:29.601011 systemd[1]: Stopped target sysinit.target. May 16 00:53:29.602106 systemd[1]: Stopped target local-fs.target. May 16 00:53:29.603217 systemd[1]: Stopped target local-fs-pre.target. May 16 00:53:29.604317 systemd[1]: Stopped target swap.target. May 16 00:53:29.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.605346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:53:29.610596 kernel: audit: type=1131 audit(1747356809.606:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.605466 systemd[1]: Stopped dracut-pre-mount.service. May 16 00:53:29.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.606689 systemd[1]: Stopped target cryptsetup.target. May 16 00:53:29.615159 kernel: audit: type=1131 audit(1747356809.610:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.609983 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:53:29.610079 systemd[1]: Stopped dracut-initqueue.service. May 16 00:53:29.611288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:53:29.611376 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:53:29.614782 systemd[1]: Stopped target paths.target. May 16 00:53:29.615749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:53:29.618478 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:53:29.619258 systemd[1]: Stopped target slices.target. 
May 16 00:53:29.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.620282 systemd[1]: Stopped target sockets.target. May 16 00:53:29.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.621617 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:53:29.621716 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 16 00:53:29.626710 iscsid[747]: iscsid shutting down. May 16 00:53:29.622967 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:53:29.623054 systemd[1]: Stopped ignition-files.service. May 16 00:53:29.624882 systemd[1]: Stopping ignition-mount.service... May 16 00:53:29.627580 systemd[1]: Stopping iscsid.service... May 16 00:53:29.629310 systemd[1]: Stopping sysroot-boot.service... May 16 00:53:29.630388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:53:29.630532 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:53:29.631643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:53:29.631745 systemd[1]: Stopped dracut-pre-trigger.service. May 16 00:53:29.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.636597 ignition[898]: INFO : Ignition 2.14.0 May 16 00:53:29.636597 ignition[898]: INFO : Stage: umount May 16 00:53:29.636597 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:53:29.636597 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:53:29.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.634228 systemd[1]: iscsid.service: Deactivated successfully. May 16 00:53:29.642818 ignition[898]: INFO : umount: umount passed May 16 00:53:29.642818 ignition[898]: INFO : Ignition finished successfully May 16 00:53:29.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.634325 systemd[1]: Stopped iscsid.service. 
May 16 00:53:29.636229 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:53:29.636294 systemd[1]: Closed iscsid.socket. May 16 00:53:29.637366 systemd[1]: Stopping iscsiuio.service... May 16 00:53:29.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.639991 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:53:29.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.640399 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:53:29.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.640502 systemd[1]: Stopped iscsiuio.service. May 16 00:53:29.641357 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:53:29.642206 systemd[1]: Finished initrd-cleanup.service. May 16 00:53:29.644175 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:53:29.644262 systemd[1]: Stopped ignition-mount.service. May 16 00:53:29.645872 systemd[1]: Stopped target network.target. May 16 00:53:29.647413 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:53:29.647462 systemd[1]: Closed iscsiuio.socket. May 16 00:53:29.648522 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:53:29.648561 systemd[1]: Stopped ignition-disks.service. May 16 00:53:29.651244 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:53:29.651287 systemd[1]: Stopped ignition-kargs.service. May 16 00:53:29.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.652310 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:53:29.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.652355 systemd[1]: Stopped ignition-setup.service. May 16 00:53:29.669000 audit: BPF prog-id=6 op=UNLOAD May 16 00:53:29.654384 systemd[1]: Stopping systemd-networkd.service... May 16 00:53:29.656301 systemd[1]: Stopping systemd-resolved.service... May 16 00:53:29.661506 systemd-networkd[741]: eth0: DHCPv6 lease lost May 16 00:53:29.671000 audit: BPF prog-id=9 op=UNLOAD May 16 00:53:29.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.663846 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:53:29.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.663947 systemd[1]: Stopped systemd-resolved.service. May 16 00:53:29.665256 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 16 00:53:29.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.665347 systemd[1]: Stopped systemd-networkd.service. May 16 00:53:29.667958 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:53:29.667988 systemd[1]: Closed systemd-networkd.socket. May 16 00:53:29.669868 systemd[1]: Stopping network-cleanup.service... May 16 00:53:29.671419 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:53:29.671528 systemd[1]: Stopped parse-ip-for-networkd.service. May 16 00:53:29.672345 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:53:29.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.672384 systemd[1]: Stopped systemd-sysctl.service. May 16 00:53:29.674384 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:53:29.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.674423 systemd[1]: Stopped systemd-modules-load.service. May 16 00:53:29.676306 systemd[1]: Stopping systemd-udevd.service... May 16 00:53:29.680255 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:53:29.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.683616 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:53:29.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.683713 systemd[1]: Stopped network-cleanup.service. May 16 00:53:29.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.685336 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:53:29.685472 systemd[1]: Stopped systemd-udevd.service. May 16 00:53:29.686697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:53:29.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.686744 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:53:29.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.687930 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:53:29.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.687958 systemd[1]: Closed systemd-udevd-kernel.socket. 
May 16 00:53:29.689145 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:53:29.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.689185 systemd[1]: Stopped dracut-pre-udev.service. May 16 00:53:29.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.690312 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:53:29.690347 systemd[1]: Stopped dracut-cmdline.service. May 16 00:53:29.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:29.691790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:53:29.691823 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:53:29.693746 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:53:29.694955 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:53:29.695007 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 16 00:53:29.696972 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:53:29.697014 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:53:29.697769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:53:29.697808 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:53:29.699673 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:53:29.700080 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:53:29.700166 systemd[1]: Stopped sysroot-boot.service. May 16 00:53:29.701037 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:53:29.701114 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:53:29.702244 systemd[1]: Reached target initrd-switch-root.target. May 16 00:53:29.703248 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:53:29.703292 systemd[1]: Stopped initrd-setup-root.service. May 16 00:53:29.705127 systemd[1]: Starting initrd-switch-root.service... May 16 00:53:29.710888 systemd[1]: Switching root. May 16 00:53:29.728873 systemd-journald[290]: Journal stopped May 16 00:53:31.725634 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 16 00:53:31.725685 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:53:31.725697 kernel: SELinux: Class anon_inode not defined in policy. 
May 16 00:53:31.725707 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:53:31.725719 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:53:31.725728 kernel: SELinux: policy capability open_perms=1 May 16 00:53:31.725742 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:53:31.725760 kernel: SELinux: policy capability always_check_network=0 May 16 00:53:31.725771 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:53:31.725782 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:53:31.725792 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:53:31.725803 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:53:31.725815 systemd[1]: Successfully loaded SELinux policy in 34.691ms. May 16 00:53:31.725833 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.936ms. May 16 00:53:31.725845 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:53:31.725856 systemd[1]: Detected virtualization kvm. May 16 00:53:31.725868 systemd[1]: Detected architecture arm64. May 16 00:53:31.725879 systemd[1]: Detected first boot. May 16 00:53:31.725889 systemd[1]: Initializing machine ID from VM UUID. May 16 00:53:31.725899 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:53:31.725913 systemd[1]: Populated /etc with preset unit settings. May 16 00:53:31.725924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:53:31.725935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:53:31.725948 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:53:31.725959 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:53:31.725970 systemd[1]: Stopped initrd-switch-root.service. May 16 00:53:31.725980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:53:31.725991 systemd[1]: Created slice system-addon\x2dconfig.slice. May 16 00:53:31.726001 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:53:31.726011 systemd[1]: Created slice system-getty.slice. May 16 00:53:31.726026 systemd[1]: Created slice system-modprobe.slice. May 16 00:53:31.726037 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:53:31.726049 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:53:31.726059 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:53:31.726069 systemd[1]: Created slice user.slice. May 16 00:53:31.726080 systemd[1]: Started systemd-ask-password-console.path. May 16 00:53:31.726090 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:53:31.726101 systemd[1]: Set up automount boot.automount. May 16 00:53:31.726111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
May 16 00:53:31.726122 systemd[1]: Stopped target initrd-switch-root.target. May 16 00:53:31.726133 systemd[1]: Stopped target initrd-fs.target. May 16 00:53:31.726144 systemd[1]: Stopped target initrd-root-fs.target. May 16 00:53:31.726154 systemd[1]: Reached target integritysetup.target. May 16 00:53:31.726165 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:53:31.726180 systemd[1]: Reached target remote-fs.target. May 16 00:53:31.726190 systemd[1]: Reached target slices.target. May 16 00:53:31.726201 systemd[1]: Reached target swap.target. May 16 00:53:31.726211 systemd[1]: Reached target torcx.target. May 16 00:53:31.726222 systemd[1]: Reached target veritysetup.target. May 16 00:53:31.726232 systemd[1]: Listening on systemd-coredump.socket. May 16 00:53:31.726244 systemd[1]: Listening on systemd-initctl.socket. May 16 00:53:31.726254 systemd[1]: Listening on systemd-networkd.socket. May 16 00:53:31.726265 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:53:31.726275 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:53:31.726286 systemd[1]: Listening on systemd-userdbd.socket. May 16 00:53:31.726296 systemd[1]: Mounting dev-hugepages.mount... May 16 00:53:31.726306 systemd[1]: Mounting dev-mqueue.mount... May 16 00:53:31.726317 systemd[1]: Mounting media.mount... May 16 00:53:31.726327 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:53:31.726337 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:53:31.726347 systemd[1]: Mounting tmp.mount... May 16 00:53:31.726357 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:53:31.726367 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:53:31.726378 systemd[1]: Starting kmod-static-nodes.service... May 16 00:53:31.726388 systemd[1]: Starting modprobe@configfs.service... May 16 00:53:31.726398 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:53:31.726410 systemd[1]: Starting modprobe@drm.service... May 16 00:53:31.726420 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:53:31.726430 systemd[1]: Starting modprobe@fuse.service... May 16 00:53:31.726441 systemd[1]: Starting modprobe@loop.service... May 16 00:53:31.726458 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:53:31.726468 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:53:31.726479 systemd[1]: Stopped systemd-fsck-root.service. May 16 00:53:31.726489 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:53:31.726499 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:53:31.726510 kernel: loop: module loaded May 16 00:53:31.726520 systemd[1]: Stopped systemd-journald.service. May 16 00:53:31.726529 kernel: fuse: init (API version 7.34) May 16 00:53:31.726539 systemd[1]: Starting systemd-journald.service... May 16 00:53:31.726549 systemd[1]: Starting systemd-modules-load.service... May 16 00:53:31.726560 systemd[1]: Starting systemd-network-generator.service... May 16 00:53:31.726570 systemd[1]: Starting systemd-remount-fs.service... May 16 00:53:31.726581 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:53:31.726591 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:53:31.726604 systemd[1]: Stopped verity-setup.service. May 16 00:53:31.726614 systemd[1]: Mounted dev-hugepages.mount. May 16 00:53:31.726624 systemd[1]: Mounted dev-mqueue.mount. 
May 16 00:53:31.726636 systemd-journald[1003]: Journal started May 16 00:53:31.726676 systemd-journald[1003]: Runtime Journal (/run/log/journal/a8122615a4084c17b713fa9e3d7fb6cd) is 6.0M, max 48.7M, 42.6M free. May 16 00:53:31.726706 systemd[1]: Mounted media.mount. May 16 00:53:29.789000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:53:29.895000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:53:29.895000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:53:29.895000 audit: BPF prog-id=10 op=LOAD May 16 00:53:29.895000 audit: BPF prog-id=10 op=UNLOAD May 16 00:53:29.895000 audit: BPF prog-id=11 op=LOAD May 16 00:53:29.895000 audit: BPF prog-id=11 op=UNLOAD May 16 00:53:29.933000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 16 00:53:29.933000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:53:29.933000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:53:29.934000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 16 00:53:29.934000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:53:29.934000 audit: CWD cwd="/" May 16 00:53:29.934000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:53:29.934000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:53:29.934000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:53:31.607000 audit: BPF prog-id=12 op=LOAD May 16 00:53:31.607000 audit: BPF prog-id=3 op=UNLOAD May 16 00:53:31.607000 audit: BPF prog-id=13 op=LOAD May 16 00:53:31.608000 
audit: BPF prog-id=14 op=LOAD May 16 00:53:31.608000 audit: BPF prog-id=4 op=UNLOAD May 16 00:53:31.608000 audit: BPF prog-id=5 op=UNLOAD May 16 00:53:31.608000 audit: BPF prog-id=15 op=LOAD May 16 00:53:31.608000 audit: BPF prog-id=12 op=UNLOAD May 16 00:53:31.608000 audit: BPF prog-id=16 op=LOAD May 16 00:53:31.609000 audit: BPF prog-id=17 op=LOAD May 16 00:53:31.609000 audit: BPF prog-id=13 op=UNLOAD May 16 00:53:31.609000 audit: BPF prog-id=14 op=UNLOAD May 16 00:53:31.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.616000 audit: BPF prog-id=15 op=UNLOAD May 16 00:53:31.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.706000 audit: BPF prog-id=18 op=LOAD May 16 00:53:31.706000 audit: BPF prog-id=19 op=LOAD May 16 00:53:31.706000 audit: BPF prog-id=20 op=LOAD May 16 00:53:31.706000 audit: BPF prog-id=16 op=UNLOAD May 16 00:53:31.706000 audit: BPF prog-id=17 op=UNLOAD May 16 00:53:31.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.724000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:53:31.724000 audit[1003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdcd4fdb0 a2=4000 a3=1 items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:53:31.724000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:53:31.606979 systemd[1]: Queued start job for default target multi-user.target. 
May 16 00:53:29.932221 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:53:31.606991 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:53:29.932524 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:53:31.610129 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:53:29.932551 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:53:29.932582 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 16 00:53:29.932592 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="skipped missing lower profile" missing profile=oem May 16 00:53:29.932623 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 16 00:53:29.932634 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 16 00:53:29.932844 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 16 00:53:29.932881 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:53:29.932892 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:53:29.933312 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 16 00:53:29.933346 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 16 00:53:29.933364 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 16 00:53:29.933377 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 16 00:53:29.933394 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" 
path=/var/lib/torcx/store/3510.3.7 May 16 00:53:29.933406 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 16 00:53:31.361530 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:53:31.361800 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:53:31.361910 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:53:31.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.362075 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:53:31.362128 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 16 00:53:31.362185 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-16T00:53:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 16 00:53:31.729461 systemd[1]: Started systemd-journald.service. May 16 00:53:31.729724 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:53:31.730423 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:53:31.731165 systemd[1]: Mounted tmp.mount. May 16 00:53:31.731959 systemd[1]: Finished kmod-static-nodes.service. May 16 00:53:31.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.732837 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:53:31.733002 systemd[1]: Finished modprobe@configfs.service. May 16 00:53:31.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:53:31.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.733920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:53:31.734076 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:53:31.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.734949 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:53:31.735111 systemd[1]: Finished modprobe@drm.service. May 16 00:53:31.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.735976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:53:31.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.736922 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:53:31.737835 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:53:31.738055 systemd[1]: Finished modprobe@fuse.service. May 16 00:53:31.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.739038 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:53:31.739570 systemd[1]: Finished modprobe@loop.service. May 16 00:53:31.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.740575 systemd[1]: Finished systemd-modules-load.service. 
May 16 00:53:31.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.741548 systemd[1]: Finished flatcar-tmpfiles.service. May 16 00:53:31.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.742416 systemd[1]: Finished systemd-network-generator.service. May 16 00:53:31.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.743544 systemd[1]: Finished systemd-remount-fs.service. May 16 00:53:31.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.744808 systemd[1]: Reached target network-pre.target. May 16 00:53:31.746675 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:53:31.748323 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:53:31.749119 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:53:31.750739 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:53:31.752572 systemd[1]: Starting systemd-journal-flush.service... May 16 00:53:31.753337 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:53:31.754603 systemd[1]: Starting systemd-random-seed.service... May 16 00:53:31.755265 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:53:31.756597 systemd[1]: Starting systemd-sysctl.service... May 16 00:53:31.758550 systemd[1]: Starting systemd-sysusers.service... May 16 00:53:31.761782 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 16 00:53:31.762607 systemd[1]: Mounted sys-kernel-config.mount. May 16 00:53:31.768264 systemd-journald[1003]: Time spent on flushing to /var/log/journal/a8122615a4084c17b713fa9e3d7fb6cd is 14.045ms for 1005 entries. May 16 00:53:31.768264 systemd-journald[1003]: System Journal (/var/log/journal/a8122615a4084c17b713fa9e3d7fb6cd) is 8.0M, max 195.6M, 187.6M free. May 16 00:53:31.804633 systemd-journald[1003]: Received client request to flush runtime journal. May 16 00:53:31.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:53:31.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.771346 systemd[1]: Finished systemd-random-seed.service. May 16 00:53:31.806398 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 16 00:53:31.772147 systemd[1]: Reached target first-boot-complete.target. May 16 00:53:31.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:31.776375 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:53:31.778426 systemd[1]: Starting systemd-udev-settle.service... May 16 00:53:31.782515 systemd[1]: Finished systemd-sysctl.service. May 16 00:53:31.789768 systemd[1]: Finished systemd-sysusers.service. May 16 00:53:31.791601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:53:31.805845 systemd[1]: Finished systemd-journal-flush.service. May 16 00:53:31.814186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:53:31.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.127412 systemd[1]: Finished systemd-hwdb-update.service. May 16 00:53:32.128000 audit: BPF prog-id=21 op=LOAD May 16 00:53:32.128000 audit: BPF prog-id=22 op=LOAD May 16 00:53:32.128000 audit: BPF prog-id=7 op=UNLOAD May 16 00:53:32.128000 audit: BPF prog-id=8 op=UNLOAD May 16 00:53:32.129414 systemd[1]: Starting systemd-udevd.service... May 16 00:53:32.150537 systemd-udevd[1038]: Using default interface naming scheme 'v252'. May 16 00:53:32.163412 systemd[1]: Started systemd-udevd.service. May 16 00:53:32.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.164000 audit: BPF prog-id=23 op=LOAD May 16 00:53:32.166332 systemd[1]: Starting systemd-networkd.service... May 16 00:53:32.169000 audit: BPF prog-id=24 op=LOAD May 16 00:53:32.169000 audit: BPF prog-id=25 op=LOAD May 16 00:53:32.169000 audit: BPF prog-id=26 op=LOAD May 16 00:53:32.170897 systemd[1]: Starting systemd-userdbd.service... May 16 00:53:32.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.196322 systemd[1]: Started systemd-userdbd.service. May 16 00:53:32.205518 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:53:32.210975 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
May 16 00:53:32.241030 systemd-networkd[1045]: lo: Link UP May 16 00:53:32.241039 systemd-networkd[1045]: lo: Gained carrier May 16 00:53:32.241351 systemd-networkd[1045]: Enumeration completed May 16 00:53:32.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.241431 systemd[1]: Started systemd-networkd.service. May 16 00:53:32.241454 systemd-networkd[1045]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:53:32.246049 systemd-networkd[1045]: eth0: Link UP May 16 00:53:32.246060 systemd-networkd[1045]: eth0: Gained carrier May 16 00:53:32.267857 systemd[1]: Finished systemd-udev-settle.service. May 16 00:53:32.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.269743 systemd[1]: Starting lvm2-activation-early.service... May 16 00:53:32.274866 systemd-networkd[1045]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:53:32.285336 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:53:32.309240 systemd[1]: Finished lvm2-activation-early.service. May 16 00:53:32.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.310156 systemd[1]: Reached target cryptsetup.target. May 16 00:53:32.311952 systemd[1]: Starting lvm2-activation.service... May 16 00:53:32.315279 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:53:32.345201 systemd[1]: Finished lvm2-activation.service. May 16 00:53:32.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.346033 systemd[1]: Reached target local-fs-pre.target. May 16 00:53:32.346761 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:53:32.346790 systemd[1]: Reached target local-fs.target. May 16 00:53:32.347421 systemd[1]: Reached target machines.target. May 16 00:53:32.349165 systemd[1]: Starting ldconfig.service... May 16 00:53:32.350221 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:53:32.350273 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:32.351334 systemd[1]: Starting systemd-boot-update.service... May 16 00:53:32.353066 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 16 00:53:32.355233 systemd[1]: Starting systemd-machine-id-commit.service... May 16 00:53:32.357720 systemd[1]: Starting systemd-sysext.service... May 16 00:53:32.359416 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) May 16 00:53:32.360600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
May 16 00:53:32.368007 systemd[1]: Unmounting usr-share-oem.mount... May 16 00:53:32.372562 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 16 00:53:32.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.373678 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 16 00:53:32.373852 systemd[1]: Unmounted usr-share-oem.mount. May 16 00:53:32.388482 kernel: loop0: detected capacity change from 0 to 211168 May 16 00:53:32.429738 systemd[1]: Finished systemd-machine-id-commit.service. May 16 00:53:32.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.437459 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:53:32.442347 systemd-fsck[1084]: fsck.fat 4.2 (2021-01-31) May 16 00:53:32.442347 systemd-fsck[1084]: /dev/vda1: 236 files, 117310/258078 clusters May 16 00:53:32.443838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 16 00:53:32.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.454462 kernel: loop1: detected capacity change from 0 to 211168 May 16 00:53:32.459468 (sd-sysext)[1088]: Using extensions 'kubernetes'. May 16 00:53:32.460114 (sd-sysext)[1088]: Merged extensions into '/usr'. May 16 00:53:32.475918 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:53:32.477336 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:53:32.479252 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:53:32.481099 systemd[1]: Starting modprobe@loop.service... May 16 00:53:32.481881 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:53:32.482012 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:32.482810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:53:32.482946 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:53:32.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.484218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:53:32.484341 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:53:32.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:53:32.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.485625 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:53:32.485748 systemd[1]: Finished modprobe@loop.service. May 16 00:53:32.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.486919 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:53:32.487024 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:53:32.532949 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:53:32.536198 systemd[1]: Finished ldconfig.service. May 16 00:53:32.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.723362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:53:32.725227 systemd[1]: Mounting boot.mount... May 16 00:53:32.726999 systemd[1]: Mounting usr-share-oem.mount... May 16 00:53:32.732927 systemd[1]: Mounted boot.mount. May 16 00:53:32.733704 systemd[1]: Mounted usr-share-oem.mount. May 16 00:53:32.735427 systemd[1]: Finished systemd-sysext.service. May 16 00:53:32.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.737242 systemd[1]: Starting ensure-sysext.service... May 16 00:53:32.739253 systemd[1]: Starting systemd-tmpfiles-setup.service... May 16 00:53:32.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.741955 systemd[1]: Finished systemd-boot-update.service. May 16 00:53:32.744359 systemd[1]: Reloading. May 16 00:53:32.748587 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 16 00:53:32.749501 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:53:32.750992 systemd-tmpfiles[1096]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 16 00:53:32.773259 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-05-16T00:53:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:53:32.773597 /usr/lib/systemd/system-generators/torcx-generator[1116]: time="2025-05-16T00:53:32Z" level=info msg="torcx already run" May 16 00:53:32.842552 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:53:32.842571 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:53:32.858283 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:53:32.899000 audit: BPF prog-id=27 op=LOAD May 16 00:53:32.899000 audit: BPF prog-id=18 op=UNLOAD May 16 00:53:32.899000 audit: BPF prog-id=28 op=LOAD May 16 00:53:32.899000 audit: BPF prog-id=29 op=LOAD May 16 00:53:32.899000 audit: BPF prog-id=19 op=UNLOAD May 16 00:53:32.899000 audit: BPF prog-id=20 op=UNLOAD May 16 00:53:32.900000 audit: BPF prog-id=30 op=LOAD May 16 00:53:32.900000 audit: BPF prog-id=23 op=UNLOAD May 16 00:53:32.901000 audit: BPF prog-id=31 op=LOAD May 16 00:53:32.901000 audit: BPF prog-id=32 op=LOAD May 16 00:53:32.901000 audit: BPF prog-id=21 op=UNLOAD May 16 00:53:32.901000 audit: BPF prog-id=22 op=UNLOAD May 16 00:53:32.902000 audit: BPF prog-id=33 op=LOAD May 16 00:53:32.902000 audit: BPF prog-id=24 op=UNLOAD May 16 00:53:32.902000 audit: BPF prog-id=34 op=LOAD May 16 00:53:32.902000 audit: BPF prog-id=35 op=LOAD May 16 00:53:32.902000 audit: BPF prog-id=25 op=UNLOAD May 16 00:53:32.902000 audit: BPF prog-id=26 op=UNLOAD May 16 00:53:32.905358 systemd[1]: Finished systemd-tmpfiles-setup.service. May 16 00:53:32.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.909438 systemd[1]: Starting audit-rules.service... May 16 00:53:32.911273 systemd[1]: Starting clean-ca-certificates.service... May 16 00:53:32.917000 audit: BPF prog-id=36 op=LOAD May 16 00:53:32.913356 systemd[1]: Starting systemd-journal-catalog-update.service... May 16 00:53:32.918922 systemd[1]: Starting systemd-resolved.service... May 16 00:53:32.919000 audit: BPF prog-id=37 op=LOAD May 16 00:53:32.921224 systemd[1]: Starting systemd-timesyncd.service... May 16 00:53:32.924342 systemd[1]: Starting systemd-update-utmp.service... May 16 00:53:32.925624 systemd[1]: Finished clean-ca-certificates.service. May 16 00:53:32.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.928253 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 16 00:53:32.936874 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:53:32.938304 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:53:32.938000 audit[1166]: SYSTEM_BOOT pid=1166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 16 00:53:32.940721 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:53:32.942941 systemd[1]: Starting modprobe@loop.service... May 16 00:53:32.943677 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:53:32.943840 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:32.943967 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:53:32.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.944893 systemd[1]: Finished systemd-journal-catalog-update.service. May 16 00:53:32.946088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:53:32.946207 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:53:32.947209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:53:32.947316 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:53:32.948348 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:53:32.948458 systemd[1]: Finished modprobe@loop.service. May 16 00:53:32.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.951195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 16 00:53:32.951483 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:53:32.952951 systemd[1]: Starting systemd-update-done.service... May 16 00:53:32.955857 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:53:32.957301 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:53:32.959458 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:53:32.961222 systemd[1]: Starting modprobe@loop.service... May 16 00:53:32.961897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:53:32.962017 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:32.962102 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:53:32.962937 systemd[1]: Finished systemd-update-utmp.service. May 16 00:53:32.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.964030 systemd[1]: Finished systemd-update-done.service. May 16 00:53:32.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.965094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:53:32.965212 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:53:32.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.966269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:53:32.966383 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:53:32.967804 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:53:32.967916 systemd[1]: Finished modprobe@loop.service. May 16 00:53:32.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:53:32.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 16 00:53:32.969850 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:53:32.969951 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:53:32.972871 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:53:32.974558 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:53:32.975000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 16 00:53:32.975000 audit[1182]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff652bfd0 a2=420 a3=0 items=0 ppid=1155 pid=1182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:53:32.975000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 16 00:53:32.977910 augenrules[1182]: No rules May 16 00:53:32.976579 systemd[1]: Starting modprobe@drm.service... May 16 00:53:32.978485 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:53:32.980265 systemd[1]: Starting modprobe@loop.service... May 16 00:53:32.981010 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:53:32.981153 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:32.982578 systemd[1]: Starting systemd-networkd-wait-online.service... May 16 00:53:32.983571 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:53:32.984839 systemd[1]: Finished audit-rules.service. May 16 00:53:32.985820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:53:32.985928 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:53:32.986999 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:53:32.987106 systemd[1]: Finished modprobe@drm.service. May 16 00:53:32.988399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:53:32.988545 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:53:32.989561 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:53:32.989665 systemd[1]: Finished modprobe@loop.service. May 16 00:53:32.990695 systemd[1]: Started systemd-timesyncd.service. May 16 00:53:32.991280 systemd-timesyncd[1165]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:53:32.991339 systemd-timesyncd[1165]: Initial clock synchronization to Fri 2025-05-16 00:53:32.617967 UTC. May 16 00:53:32.992423 systemd[1]: Reached target time-set.target. May 16 00:53:32.993296 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:53:32.993334 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:53:32.993655 systemd[1]: Finished ensure-sysext.service. May 16 00:53:32.994345 systemd-resolved[1161]: Positive Trust Anchors: May 16 00:53:32.994604 systemd-resolved[1161]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:53:32.994682 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:53:33.007680 systemd-resolved[1161]: Defaulting to hostname 'linux'. May 16 00:53:33.009007 systemd[1]: Started systemd-resolved.service. May 16 00:53:33.009687 systemd[1]: Reached target network.target. May 16 00:53:33.010222 systemd[1]: Reached target nss-lookup.target. May 16 00:53:33.010807 systemd[1]: Reached target sysinit.target. May 16 00:53:33.011406 systemd[1]: Started motdgen.path. May 16 00:53:33.011954 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:53:33.012863 systemd[1]: Started logrotate.timer. May 16 00:53:33.013538 systemd[1]: Started mdadm.timer. May 16 00:53:33.014018 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:53:33.014614 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:53:33.014638 systemd[1]: Reached target paths.target. May 16 00:53:33.015149 systemd[1]: Reached target timers.target. May 16 00:53:33.015952 systemd[1]: Listening on dbus.socket. May 16 00:53:33.017422 systemd[1]: Starting docker.socket... May 16 00:53:33.020198 systemd[1]: Listening on sshd.socket. May 16 00:53:33.020882 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:53:33.021283 systemd[1]: Listening on docker.socket. May 16 00:53:33.021975 systemd[1]: Reached target sockets.target. May 16 00:53:33.022560 systemd[1]: Reached target basic.target. May 16 00:53:33.023109 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:53:33.023140 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:53:33.024052 systemd[1]: Starting containerd.service... May 16 00:53:33.025556 systemd[1]: Starting dbus.service... May 16 00:53:33.027066 systemd[1]: Starting enable-oem-cloudinit.service... May 16 00:53:33.028712 systemd[1]: Starting extend-filesystems.service... May 16 00:53:33.029386 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:53:33.030522 systemd[1]: Starting motdgen.service... May 16 00:53:33.032086 systemd[1]: Starting prepare-helm.service... May 16 00:53:33.033787 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:53:33.035560 systemd[1]: Starting sshd-keygen.service... May 16 00:53:33.037395 jq[1198]: false May 16 00:53:33.038260 systemd[1]: Starting systemd-logind.service... May 16 00:53:33.039216 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 16 00:53:33.039279 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:53:33.039659 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:53:33.040391 systemd[1]: Starting update-engine.service... May 16 00:53:33.043477 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:53:33.045659 jq[1213]: true May 16 00:53:33.047070 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:53:33.047227 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 16 00:53:33.048506 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:53:33.048675 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:53:33.054993 jq[1217]: true May 16 00:53:33.061335 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:53:33.061504 systemd[1]: Finished motdgen.service. May 16 00:53:33.064091 tar[1216]: linux-arm64/LICENSE May 16 00:53:33.064265 tar[1216]: linux-arm64/helm May 16 00:53:33.068340 extend-filesystems[1199]: Found loop1 May 16 00:53:33.068340 extend-filesystems[1199]: Found vda May 16 00:53:33.068340 extend-filesystems[1199]: Found vda1 May 16 00:53:33.068340 extend-filesystems[1199]: Found vda2 May 16 00:53:33.075208 extend-filesystems[1199]: Found vda3 May 16 00:53:33.075208 extend-filesystems[1199]: Found usr May 16 00:53:33.075208 extend-filesystems[1199]: Found vda4 May 16 00:53:33.075208 extend-filesystems[1199]: Found vda6 May 16 00:53:33.075208 extend-filesystems[1199]: Found vda7 May 16 00:53:33.075208 extend-filesystems[1199]: Found vda9 May 16 00:53:33.075208 extend-filesystems[1199]: Checking size of /dev/vda9 May 16 00:53:33.086084 dbus-daemon[1197]: [system] SELinux support is enabled May 16 00:53:33.086237 systemd[1]: Started dbus.service. May 16 00:53:33.088354 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:53:33.088371 systemd[1]: Reached target system-config.target. May 16 00:53:33.089119 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:53:33.089141 systemd[1]: Reached target user-config.target. May 16 00:53:33.094530 extend-filesystems[1199]: Resized partition /dev/vda9 May 16 00:53:33.105001 systemd-logind[1206]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:53:33.106740 systemd-logind[1206]: New seat seat0. May 16 00:53:33.109703 extend-filesystems[1243]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:53:33.116719 systemd[1]: Started systemd-logind.service. May 16 00:53:33.122465 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:53:33.138047 update_engine[1211]: I0516 00:53:33.137814 1211 main.cc:92] Flatcar Update Engine starting May 16 00:53:33.148019 update_engine[1211]: I0516 00:53:33.139889 1211 update_check_scheduler.cc:74] Next update check in 5m14s May 16 00:53:33.139869 systemd[1]: Started update-engine.service. May 16 00:53:33.143299 systemd[1]: Started locksmithd.service. 
May 16 00:53:33.157461 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:53:33.167394 extend-filesystems[1243]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:53:33.167394 extend-filesystems[1243]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:53:33.167394 extend-filesystems[1243]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:53:33.174525 extend-filesystems[1199]: Resized filesystem in /dev/vda9 May 16 00:53:33.175749 bash[1248]: Updated "/home/core/.ssh/authorized_keys" May 16 00:53:33.169042 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:53:33.169195 systemd[1]: Finished extend-filesystems.service. May 16 00:53:33.172985 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:53:33.176951 env[1218]: time="2025-05-16T00:53:33.176894363Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:53:33.195343 env[1218]: time="2025-05-16T00:53:33.195308563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:53:33.195558 env[1218]: time="2025-05-16T00:53:33.195537976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.197988 env[1218]: time="2025-05-16T00:53:33.197958633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:53:33.198070 env[1218]: time="2025-05-16T00:53:33.198055569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.198344 env[1218]: time="2025-05-16T00:53:33.198320447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:53:33.198429 env[1218]: time="2025-05-16T00:53:33.198413417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.198514 env[1218]: time="2025-05-16T00:53:33.198498493Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:53:33.198568 env[1218]: time="2025-05-16T00:53:33.198554474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.198694 env[1218]: time="2025-05-16T00:53:33.198676426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.199095 env[1218]: time="2025-05-16T00:53:33.199071530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:53:33.199293 env[1218]: time="2025-05-16T00:53:33.199271847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:53:33.199356 env[1218]: time="2025-05-16T00:53:33.199341746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:53:33.199500 env[1218]: time="2025-05-16T00:53:33.199480553Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:53:33.199582 env[1218]: time="2025-05-16T00:53:33.199568261Z" level=info msg="metadata content store policy set" policy=shared May 16 00:53:33.204240 env[1218]: time="2025-05-16T00:53:33.204212004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:53:33.204487 env[1218]: time="2025-05-16T00:53:33.204456899Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:53:33.204547 env[1218]: time="2025-05-16T00:53:33.204534616Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:53:33.204742 env[1218]: time="2025-05-16T00:53:33.204724293Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:53:33.204814 env[1218]: time="2025-05-16T00:53:33.204799989Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:53:33.204909 env[1218]: time="2025-05-16T00:53:33.204894103Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:53:33.204966 env[1218]: time="2025-05-16T00:53:33.204952562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:53:33.205592 env[1218]: time="2025-05-16T00:53:33.205423781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:53:33.205695 env[1218]: time="2025-05-16T00:53:33.205666045Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:53:33.205840 env[1218]: time="2025-05-16T00:53:33.205751808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:53:33.205904 env[1218]: time="2025-05-16T00:53:33.205889967Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:53:33.205956 env[1218]: time="2025-05-16T00:53:33.205943278Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:53:33.206106 env[1218]: time="2025-05-16T00:53:33.206086509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:53:33.206256 env[1218]: time="2025-05-16T00:53:33.206237404Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:53:33.206576 env[1218]: time="2025-05-16T00:53:33.206547508Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:53:33.206619 env[1218]: time="2025-05-16T00:53:33.206590180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 16 00:53:33.206619 env[1218]: time="2025-05-16T00:53:33.206603641Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:53:33.206723 env[1218]: time="2025-05-16T00:53:33.206708051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206749 env[1218]: time="2025-05-16T00:53:33.206726089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206749 env[1218]: time="2025-05-16T00:53:33.206738063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206803 env[1218]: time="2025-05-16T00:53:33.206749312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206803 env[1218]: time="2025-05-16T00:53:33.206760790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206803 env[1218]: time="2025-05-16T00:53:33.206771773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206803 env[1218]: time="2025-05-16T00:53:33.206781955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206803 env[1218]: time="2025-05-16T00:53:33.206792899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206889 env[1218]: time="2025-05-16T00:53:33.206806055Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:53:33.206944 env[1218]: time="2025-05-16T00:53:33.206927588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206968 env[1218]: time="2025-05-16T00:53:33.206948981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:53:33.206968 env[1218]: time="2025-05-16T00:53:33.206961260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:53:33.207003 env[1218]: time="2025-05-16T00:53:33.206972433Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:53:33.207003 env[1218]: time="2025-05-16T00:53:33.206985360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:53:33.207003 env[1218]: time="2025-05-16T00:53:33.206996038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:53:33.207060 env[1218]: time="2025-05-16T00:53:33.207010605Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:53:33.207060 env[1218]: time="2025-05-16T00:53:33.207040464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:53:33.207279 env[1218]: time="2025-05-16T00:53:33.207216680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:53:33.207279 env[1218]: time="2025-05-16T00:53:33.207271173Z" level=info msg="Connect containerd service" May 16 00:53:33.207871 env[1218]: time="2025-05-16T00:53:33.207302214Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:53:33.207871 env[1218]: time="2025-05-16T00:53:33.207855688Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:53:33.208240 env[1218]: time="2025-05-16T00:53:33.208221239Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:53:33.208271 env[1218]: time="2025-05-16T00:53:33.208261965Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:53:33.209129 env[1218]: time="2025-05-16T00:53:33.208299108Z" level=info msg="containerd successfully booted in 0.033456s" May 16 00:53:33.208354 systemd[1]: Started containerd.service. 
May 16 00:53:33.209408 env[1218]: time="2025-05-16T00:53:33.209369676Z" level=info msg="Start subscribing containerd event" May 16 00:53:33.209469 env[1218]: time="2025-05-16T00:53:33.209423978Z" level=info msg="Start recovering state" May 16 00:53:33.209514 env[1218]: time="2025-05-16T00:53:33.209494259Z" level=info msg="Start event monitor" May 16 00:53:33.209549 env[1218]: time="2025-05-16T00:53:33.209528046Z" level=info msg="Start snapshots syncer" May 16 00:53:33.209549 env[1218]: time="2025-05-16T00:53:33.209539753Z" level=info msg="Start cni network conf syncer for default" May 16 00:53:33.209586 env[1218]: time="2025-05-16T00:53:33.209551117Z" level=info msg="Start streaming server" May 16 00:53:33.220015 locksmithd[1249]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:53:33.494945 tar[1216]: linux-arm64/README.md May 16 00:53:33.499305 systemd[1]: Finished prepare-helm.service. May 16 00:53:33.884552 systemd-networkd[1045]: eth0: Gained IPv6LL May 16 00:53:33.886206 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:53:33.887214 systemd[1]: Reached target network-online.target. May 16 00:53:33.889343 systemd[1]: Starting kubelet.service... May 16 00:53:34.440589 systemd[1]: Started kubelet.service. May 16 00:53:34.810903 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:53:34.829020 systemd[1]: Finished sshd-keygen.service. May 16 00:53:34.831086 systemd[1]: Starting issuegen.service... May 16 00:53:34.835615 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:53:34.835757 systemd[1]: Finished issuegen.service. May 16 00:53:34.837576 systemd[1]: Starting systemd-user-sessions.service... May 16 00:53:34.843646 systemd[1]: Finished systemd-user-sessions.service. May 16 00:53:34.845550 systemd[1]: Started getty@tty1.service. May 16 00:53:34.847259 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:53:34.848167 systemd[1]: Reached target getty.target. May 16 00:53:34.848832 systemd[1]: Reached target multi-user.target. May 16 00:53:34.850119 kubelet[1266]: E0516 00:53:34.850010 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:53:34.851567 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:53:34.853137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:53:34.853247 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:53:34.856349 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:53:34.856517 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:53:34.857290 systemd[1]: Startup finished in 570ms (kernel) + 5.182s (initrd) + 5.106s (userspace) = 10.859s. May 16 00:53:36.815252 systemd[1]: Created slice system-sshd.slice. May 16 00:53:36.816353 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:46012.service. May 16 00:53:36.866699 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:53:36.868836 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:36.878750 systemd-logind[1206]: New session 1 of user core. 
May 16 00:53:36.879577 systemd[1]: Created slice user-500.slice. May 16 00:53:36.880576 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:53:36.888107 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:53:36.889288 systemd[1]: Starting user@500.service... May 16 00:53:36.891999 (systemd)[1291]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:36.947905 systemd[1291]: Queued start job for default target default.target. May 16 00:53:36.948320 systemd[1291]: Reached target paths.target. May 16 00:53:36.948351 systemd[1291]: Reached target sockets.target. May 16 00:53:36.948361 systemd[1291]: Reached target timers.target. May 16 00:53:36.948370 systemd[1291]: Reached target basic.target. May 16 00:53:36.948407 systemd[1291]: Reached target default.target. May 16 00:53:36.948439 systemd[1291]: Startup finished in 51ms. May 16 00:53:36.948486 systemd[1]: Started user@500.service. May 16 00:53:36.949590 systemd[1]: Started session-1.scope. May 16 00:53:36.998345 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:46022.service. May 16 00:53:37.053545 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 46022 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:53:37.054965 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:37.059379 systemd[1]: Started session-2.scope. May 16 00:53:37.059647 systemd-logind[1206]: New session 2 of user core. May 16 00:53:37.111696 sshd[1300]: pam_unix(sshd:session): session closed for user core May 16 00:53:37.114392 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:46022.service: Deactivated successfully. May 16 00:53:37.114999 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:53:37.115480 systemd-logind[1206]: Session 2 logged out. Waiting for processes to exit. May 16 00:53:37.116482 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:46026.service. May 16 00:53:37.117083 systemd-logind[1206]: Removed session 2. May 16 00:53:37.162993 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 46026 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:53:37.164016 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:37.166895 systemd-logind[1206]: New session 3 of user core. May 16 00:53:37.167639 systemd[1]: Started session-3.scope. May 16 00:53:37.214477 sshd[1306]: pam_unix(sshd:session): session closed for user core May 16 00:53:37.217440 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:46042.service. May 16 00:53:37.217944 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:46026.service: Deactivated successfully. May 16 00:53:37.218510 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:53:37.218988 systemd-logind[1206]: Session 3 logged out. Waiting for processes to exit. May 16 00:53:37.219790 systemd-logind[1206]: Removed session 3. May 16 00:53:37.261365 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 46042 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:53:37.262679 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:37.265492 systemd-logind[1206]: New session 4 of user core. May 16 00:53:37.266197 systemd[1]: Started session-4.scope. May 16 00:53:37.317487 sshd[1311]: pam_unix(sshd:session): session closed for user core May 16 00:53:37.319543 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:46042.service: Deactivated successfully. 
May 16 00:53:37.320041 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:53:37.320495 systemd-logind[1206]: Session 4 logged out. Waiting for processes to exit. May 16 00:53:37.321377 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:46052.service. May 16 00:53:37.321969 systemd-logind[1206]: Removed session 4. May 16 00:53:37.364786 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 46052 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:53:37.366228 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:53:37.369315 systemd-logind[1206]: New session 5 of user core. May 16 00:53:37.370023 systemd[1]: Started session-5.scope. May 16 00:53:37.426317 sudo[1321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:53:37.426564 sudo[1321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:53:37.484553 systemd[1]: Starting docker.service... May 16 00:53:37.565575 env[1333]: time="2025-05-16T00:53:37.565523692Z" level=info msg="Starting up" May 16 00:53:37.566972 env[1333]: time="2025-05-16T00:53:37.566937344Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:53:37.566972 env[1333]: time="2025-05-16T00:53:37.566962788Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:53:37.567067 env[1333]: time="2025-05-16T00:53:37.566981852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:53:37.567067 env[1333]: time="2025-05-16T00:53:37.567000371Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:53:37.569068 env[1333]: time="2025-05-16T00:53:37.569044418Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:53:37.569068 env[1333]: time="2025-05-16T00:53:37.569063949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:53:37.569156 env[1333]: time="2025-05-16T00:53:37.569076632Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:53:37.569156 env[1333]: time="2025-05-16T00:53:37.569084803Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:53:37.759821 env[1333]: time="2025-05-16T00:53:37.759694373Z" level=info msg="Loading containers: start." May 16 00:53:37.876476 kernel: Initializing XFRM netlink socket May 16 00:53:37.898156 env[1333]: time="2025-05-16T00:53:37.898115268Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 16 00:53:37.946633 systemd-networkd[1045]: docker0: Link UP May 16 00:53:37.969547 env[1333]: time="2025-05-16T00:53:37.969512568Z" level=info msg="Loading containers: done." May 16 00:53:37.984676 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck764560824-merged.mount: Deactivated successfully. 
May 16 00:53:37.985645 env[1333]: time="2025-05-16T00:53:37.985616994Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:53:37.985780 env[1333]: time="2025-05-16T00:53:37.985762114Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 16 00:53:37.985869 env[1333]: time="2025-05-16T00:53:37.985854671Z" level=info msg="Daemon has completed initialization" May 16 00:53:37.999685 systemd[1]: Started docker.service. May 16 00:53:38.007905 env[1333]: time="2025-05-16T00:53:38.007799333Z" level=info msg="API listen on /run/docker.sock" May 16 00:53:38.476533 env[1218]: time="2025-05-16T00:53:38.476490512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 16 00:53:39.062426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804312434.mount: Deactivated successfully. May 16 00:53:40.405185 env[1218]: time="2025-05-16T00:53:40.405113087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:40.406259 env[1218]: time="2025-05-16T00:53:40.406230630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:40.408111 env[1218]: time="2025-05-16T00:53:40.408085296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:40.409685 env[1218]: time="2025-05-16T00:53:40.409660144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:40.410513 env[1218]: time="2025-05-16T00:53:40.410484362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\"" May 16 00:53:40.413541 env[1218]: time="2025-05-16T00:53:40.413518534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 16 00:53:41.851310 env[1218]: time="2025-05-16T00:53:41.851259606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:41.852788 env[1218]: time="2025-05-16T00:53:41.852748036Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:41.855805 env[1218]: time="2025-05-16T00:53:41.855764649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:41.857007 env[1218]: time="2025-05-16T00:53:41.856978552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:53:41.857944 env[1218]: time="2025-05-16T00:53:41.857903640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\"" May 16 00:53:41.858793 env[1218]: time="2025-05-16T00:53:41.858689437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 16 00:53:43.292486 env[1218]: time="2025-05-16T00:53:43.292405768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:43.293912 env[1218]: time="2025-05-16T00:53:43.293879140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:43.295760 env[1218]: time="2025-05-16T00:53:43.295734840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:43.297859 env[1218]: time="2025-05-16T00:53:43.297827200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:43.298523 env[1218]: time="2025-05-16T00:53:43.298489214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\"" May 16 00:53:43.299679 env[1218]: time="2025-05-16T00:53:43.299652202Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 16 00:53:44.308714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746955597.mount: Deactivated successfully. 
May 16 00:53:44.973361 env[1218]: time="2025-05-16T00:53:44.973311458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:44.975027 env[1218]: time="2025-05-16T00:53:44.975000244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:44.976345 env[1218]: time="2025-05-16T00:53:44.976303496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:44.978013 env[1218]: time="2025-05-16T00:53:44.977987138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:44.978496 env[1218]: time="2025-05-16T00:53:44.978466771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 16 00:53:44.979097 env[1218]: time="2025-05-16T00:53:44.979024477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 16 00:53:44.987632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:53:44.987795 systemd[1]: Stopped kubelet.service. May 16 00:53:44.989116 systemd[1]: Starting kubelet.service... May 16 00:53:45.080607 systemd[1]: Started kubelet.service. May 16 00:53:45.133340 kubelet[1466]: E0516 00:53:45.133293 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:53:45.136041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:53:45.136173 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:53:45.654751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809526329.mount: Deactivated successfully. 
May 16 00:53:46.642527 env[1218]: time="2025-05-16T00:53:46.642435155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:46.644235 env[1218]: time="2025-05-16T00:53:46.644027473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:46.646390 env[1218]: time="2025-05-16T00:53:46.646364418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:46.648069 env[1218]: time="2025-05-16T00:53:46.648037507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:46.649001 env[1218]: time="2025-05-16T00:53:46.648960138Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" May 16 00:53:46.649432 env[1218]: time="2025-05-16T00:53:46.649369861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:53:47.102377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016912242.mount: Deactivated successfully. May 16 00:53:47.105401 env[1218]: time="2025-05-16T00:53:47.105367921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:47.107153 env[1218]: time="2025-05-16T00:53:47.107124908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:47.108349 env[1218]: time="2025-05-16T00:53:47.108321080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:47.109556 env[1218]: time="2025-05-16T00:53:47.109518284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:47.110230 env[1218]: time="2025-05-16T00:53:47.110202247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 00:53:47.110668 env[1218]: time="2025-05-16T00:53:47.110638843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 16 00:53:49.915327 env[1218]: time="2025-05-16T00:53:49.915250104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:49.917295 env[1218]: time="2025-05-16T00:53:49.917265024Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:49.919156 env[1218]: 
time="2025-05-16T00:53:49.919130014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:49.921105 env[1218]: time="2025-05-16T00:53:49.921076472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:49.922101 env[1218]: time="2025-05-16T00:53:49.922057757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" May 16 00:53:53.375521 systemd[1]: Stopped kubelet.service. May 16 00:53:53.377418 systemd[1]: Starting kubelet.service... May 16 00:53:53.399773 systemd[1]: Reloading. May 16 00:53:53.452599 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-05-16T00:53:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:53:53.452625 /usr/lib/systemd/system-generators/torcx-generator[1524]: time="2025-05-16T00:53:53Z" level=info msg="torcx already run" May 16 00:53:53.536577 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:53:53.536594 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:53:53.551991 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:53:53.616046 systemd[1]: Started kubelet.service. May 16 00:53:53.617305 systemd[1]: Stopping kubelet.service... May 16 00:53:53.617833 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:53:53.618010 systemd[1]: Stopped kubelet.service. May 16 00:53:53.619565 systemd[1]: Starting kubelet.service... May 16 00:53:53.714773 systemd[1]: Started kubelet.service. May 16 00:53:53.757745 kubelet[1570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:53:53.758091 kubelet[1570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:53:53.758150 kubelet[1570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:53:53.758290 kubelet[1570]: I0516 00:53:53.758262 1570 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:53:54.570649 kubelet[1570]: I0516 00:53:54.570602 1570 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 00:53:54.570649 kubelet[1570]: I0516 00:53:54.570636 1570 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:53:54.570920 kubelet[1570]: I0516 00:53:54.570891 1570 server.go:956] "Client rotation is on, will bootstrap in background" May 16 00:53:54.624224 kubelet[1570]: E0516 00:53:54.624181 1570 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 16 00:53:54.626489 kubelet[1570]: I0516 00:53:54.626469 1570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:53:54.638006 kubelet[1570]: E0516 00:53:54.637963 1570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:53:54.638127 kubelet[1570]: I0516 00:53:54.638112 1570 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:53:54.640674 kubelet[1570]: I0516 00:53:54.640650 1570 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:53:54.641839 kubelet[1570]: I0516 00:53:54.641802 1570 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:53:54.642076 kubelet[1570]: I0516 00:53:54.641929 1570 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:53:54.642260 kubelet[1570]: I0516 00:53:54.642246 1570 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:53:54.642319 kubelet[1570]: I0516 00:53:54.642310 1570 container_manager_linux.go:303] "Creating device plugin manager" May 16 00:53:54.642571 kubelet[1570]: I0516 00:53:54.642555 1570 state_mem.go:36] "Initialized new in-memory state store" May 16 00:53:54.645330 kubelet[1570]: I0516 00:53:54.645307 1570 kubelet.go:480] "Attempting to sync node with API server" May 16 00:53:54.645465 kubelet[1570]: I0516 00:53:54.645425 1570 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:53:54.645561 kubelet[1570]: I0516 00:53:54.645552 1570 kubelet.go:386] "Adding apiserver pod source" May 16 00:53:54.651541 kubelet[1570]: I0516 00:53:54.651506 1570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:53:54.652394 kubelet[1570]: I0516 00:53:54.652374 1570 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:53:54.653085 kubelet[1570]: I0516 00:53:54.653056 1570 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 00:53:54.653185 kubelet[1570]: W0516 00:53:54.653170 1570 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:53:54.654812 kubelet[1570]: E0516 00:53:54.654780 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 00:53:54.654886 kubelet[1570]: E0516 00:53:54.654863 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 16 00:53:54.655181 kubelet[1570]: I0516 00:53:54.655159 1570 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:53:54.655221 kubelet[1570]: I0516 00:53:54.655194 1570 server.go:1289] "Started kubelet" May 16 00:53:54.655782 kubelet[1570]: I0516 00:53:54.655521 1570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:53:54.655862 kubelet[1570]: I0516 00:53:54.655800 1570 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:53:54.655862 kubelet[1570]: I0516 00:53:54.655845 1570 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:53:54.656726 kubelet[1570]: I0516 00:53:54.656692 1570 server.go:317] "Adding debug handlers to kubelet server" May 16 00:53:54.657378 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:53:54.657512 kubelet[1570]: I0516 00:53:54.657491 1570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:53:54.660551 kubelet[1570]: I0516 00:53:54.660495 1570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:53:54.662276 kubelet[1570]: E0516 00:53:54.661998 1570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:53:54.662276 kubelet[1570]: I0516 00:53:54.662024 1570 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:53:54.662276 kubelet[1570]: I0516 00:53:54.662212 1570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:53:54.662276 kubelet[1570]: I0516 00:53:54.662274 1570 reconciler.go:26] "Reconciler: start to sync state" May 16 00:53:54.662633 kubelet[1570]: E0516 00:53:54.662611 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 00:53:54.663251 kubelet[1570]: I0516 00:53:54.663000 1570 factory.go:223] Registration of the systemd container factory successfully May 16 00:53:54.663251 kubelet[1570]: I0516 00:53:54.663068 1570 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:53:54.663641 kubelet[1570]: E0516 00:53:54.663412 1570 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:53:54.663932 kubelet[1570]: E0516 00:53:54.663899 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" May 16 00:53:54.664075 kubelet[1570]: I0516 00:53:54.664003 1570 factory.go:223] Registration of the containerd container factory successfully May 16 00:53:54.672212 kubelet[1570]: E0516 00:53:54.671199 1570 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdbc88178ff7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:53:54.655174527 +0000 UTC m=+0.931017804,LastTimestamp:2025-05-16 00:53:54.655174527 +0000 UTC m=+0.931017804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:53:54.674862 kubelet[1570]: I0516 00:53:54.674839 1570 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:53:54.674862 kubelet[1570]: I0516 00:53:54.674854 1570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:53:54.674962 kubelet[1570]: I0516 00:53:54.674869 1570 state_mem.go:36] "Initialized new in-memory state store" May 16 00:53:54.753786 kubelet[1570]: I0516 00:53:54.753757 1570 policy_none.go:49] "None policy: Start" May 16 00:53:54.753910 kubelet[1570]: I0516 00:53:54.753796 1570 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:53:54.753910 kubelet[1570]: I0516 00:53:54.753810 1570 state_mem.go:35] "Initializing new in-memory state store" May 16 00:53:54.758733 systemd[1]: Created slice kubepods.slice. May 16 00:53:54.760161 kubelet[1570]: I0516 00:53:54.759812 1570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 00:53:54.761748 kubelet[1570]: I0516 00:53:54.761724 1570 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 00:53:54.761930 kubelet[1570]: I0516 00:53:54.761910 1570 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 00:53:54.762093 kubelet[1570]: I0516 00:53:54.762077 1570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:53:54.762166 kubelet[1570]: I0516 00:53:54.762156 1570 kubelet.go:2436] "Starting kubelet main sync loop" May 16 00:53:54.762271 kubelet[1570]: E0516 00:53:54.762253 1570 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:53:54.762626 kubelet[1570]: E0516 00:53:54.762043 1570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:53:54.762626 kubelet[1570]: E0516 00:53:54.762520 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 00:53:54.763213 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:53:54.768309 systemd[1]: Created slice kubepods-besteffort.slice. May 16 00:53:54.778320 kubelet[1570]: E0516 00:53:54.778292 1570 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 00:53:54.778551 kubelet[1570]: I0516 00:53:54.778486 1570 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:53:54.778551 kubelet[1570]: I0516 00:53:54.778506 1570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:53:54.779231 kubelet[1570]: I0516 00:53:54.778736 1570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:53:54.779638 kubelet[1570]: E0516 00:53:54.779617 1570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:53:54.779771 kubelet[1570]: E0516 00:53:54.779756 1570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:53:54.866240 kubelet[1570]: E0516 00:53:54.864711 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" May 16 00:53:54.871115 systemd[1]: Created slice kubepods-burstable-podaff7ae6192ec66855881d13eb9ebee2c.slice. May 16 00:53:54.880250 kubelet[1570]: I0516 00:53:54.880208 1570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:53:54.880585 kubelet[1570]: E0516 00:53:54.880562 1570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 16 00:53:54.883113 kubelet[1570]: E0516 00:53:54.883092 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:54.886603 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. 
May 16 00:53:54.888059 kubelet[1570]: E0516 00:53:54.888038 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:54.901637 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. May 16 00:53:54.903191 kubelet[1570]: E0516 00:53:54.903169 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:55.032805 kubelet[1570]: E0516 00:53:55.032713 1570 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdbc88178ff7f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:53:54.655174527 +0000 UTC m=+0.931017804,LastTimestamp:2025-05-16 00:53:54.655174527 +0000 UTC m=+0.931017804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:53:55.063878 kubelet[1570]: I0516 00:53:55.063849 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:53:55.063952 kubelet[1570]: I0516 00:53:55.063886 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 00:53:55.063952 kubelet[1570]: I0516 00:53:55.063904 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:53:55.063952 kubelet[1570]: I0516 00:53:55.063918 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:53:55.064064 kubelet[1570]: I0516 00:53:55.063972 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:53:55.064064 kubelet[1570]: I0516 00:53:55.064003 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:53:55.064064 kubelet[1570]: I0516 00:53:55.064023 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:53:55.064064 kubelet[1570]: I0516 00:53:55.064043 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:53:55.064064 kubelet[1570]: I0516 00:53:55.064060 1570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:53:55.081839 kubelet[1570]: I0516 00:53:55.081816 1570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:53:55.082129 kubelet[1570]: E0516 00:53:55.082070 1570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 16 00:53:55.184548 kubelet[1570]: E0516 00:53:55.183955 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.184621 env[1218]: time="2025-05-16T00:53:55.184497450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aff7ae6192ec66855881d13eb9ebee2c,Namespace:kube-system,Attempt:0,}" May 16 00:53:55.189041 kubelet[1570]: E0516 00:53:55.189017 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.189599 env[1218]: time="2025-05-16T00:53:55.189369256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 16 00:53:55.203723 kubelet[1570]: E0516 00:53:55.203687 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.204121 env[1218]: time="2025-05-16T00:53:55.204083231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 16 00:53:55.267035 kubelet[1570]: E0516 00:53:55.266990 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: 
connection refused" interval="800ms" May 16 00:53:55.484304 kubelet[1570]: I0516 00:53:55.483995 1570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:53:55.484399 kubelet[1570]: E0516 00:53:55.484311 1570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 16 00:53:55.644628 kubelet[1570]: E0516 00:53:55.644570 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 16 00:53:55.703007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596305343.mount: Deactivated successfully. May 16 00:53:55.706790 env[1218]: time="2025-05-16T00:53:55.706737072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.709022 env[1218]: time="2025-05-16T00:53:55.708995435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.710586 env[1218]: time="2025-05-16T00:53:55.710551458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.711221 env[1218]: time="2025-05-16T00:53:55.711193627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.712267 env[1218]: time="2025-05-16T00:53:55.712239715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.713342 env[1218]: time="2025-05-16T00:53:55.713321194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.714865 env[1218]: time="2025-05-16T00:53:55.714815092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.717105 env[1218]: time="2025-05-16T00:53:55.717079720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.717852 env[1218]: time="2025-05-16T00:53:55.717806000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.721553 env[1218]: time="2025-05-16T00:53:55.721525542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:53:55.722563 env[1218]: time="2025-05-16T00:53:55.722542741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.724512 env[1218]: time="2025-05-16T00:53:55.724472638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:53:55.736700 kubelet[1570]: E0516 00:53:55.736213 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 16 00:53:55.753096 env[1218]: time="2025-05-16T00:53:55.753034613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:53:55.753096 env[1218]: time="2025-05-16T00:53:55.753080659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:53:55.753230 env[1218]: time="2025-05-16T00:53:55.753094265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:53:55.753429 env[1218]: time="2025-05-16T00:53:55.753352266Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c72174aee0e4056e2972640604d9749b4f51e74f6b12426c7a961e5c68cdb1b7 pid=1628 runtime=io.containerd.runc.v2 May 16 00:53:55.753564 env[1218]: time="2025-05-16T00:53:55.753512229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:53:55.753610 env[1218]: time="2025-05-16T00:53:55.753545387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:53:55.753610 env[1218]: time="2025-05-16T00:53:55.753570884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:53:55.753773 env[1218]: time="2025-05-16T00:53:55.753696573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53df598664925a08e9a34315ed0feaa3a6f414fea8bc12e2b1c790f4cb7fe333 pid=1629 runtime=io.containerd.runc.v2 May 16 00:53:55.754577 env[1218]: time="2025-05-16T00:53:55.754515383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:53:55.754643 env[1218]: time="2025-05-16T00:53:55.754551494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:53:55.754643 env[1218]: time="2025-05-16T00:53:55.754612862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:53:55.754860 env[1218]: time="2025-05-16T00:53:55.754759139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae128c6df2af639fa6f0730b7d085b78e6e6c96449a782f19a3fb54c64e151e3 pid=1633 runtime=io.containerd.runc.v2 May 16 00:53:55.765639 systemd[1]: Started cri-containerd-53df598664925a08e9a34315ed0feaa3a6f414fea8bc12e2b1c790f4cb7fe333.scope. May 16 00:53:55.767633 systemd[1]: Started cri-containerd-c72174aee0e4056e2972640604d9749b4f51e74f6b12426c7a961e5c68cdb1b7.scope. May 16 00:53:55.777466 systemd[1]: Started cri-containerd-ae128c6df2af639fa6f0730b7d085b78e6e6c96449a782f19a3fb54c64e151e3.scope. May 16 00:53:55.784061 kubelet[1570]: E0516 00:53:55.783943 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 00:53:55.823496 env[1218]: time="2025-05-16T00:53:55.823457326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"c72174aee0e4056e2972640604d9749b4f51e74f6b12426c7a961e5c68cdb1b7\"" May 16 00:53:55.825238 kubelet[1570]: E0516 00:53:55.825208 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.830904 env[1218]: time="2025-05-16T00:53:55.830867082Z" level=info msg="CreateContainer within sandbox \"c72174aee0e4056e2972640604d9749b4f51e74f6b12426c7a961e5c68cdb1b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:53:55.831812 env[1218]: time="2025-05-16T00:53:55.831779261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae128c6df2af639fa6f0730b7d085b78e6e6c96449a782f19a3fb54c64e151e3\"" May 16 00:53:55.831911 env[1218]: time="2025-05-16T00:53:55.831880411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aff7ae6192ec66855881d13eb9ebee2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"53df598664925a08e9a34315ed0feaa3a6f414fea8bc12e2b1c790f4cb7fe333\"" May 16 00:53:55.832691 kubelet[1570]: E0516 00:53:55.832666 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.833687 kubelet[1570]: E0516 00:53:55.832948 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:55.835975 env[1218]: time="2025-05-16T00:53:55.835896298Z" level=info msg="CreateContainer within sandbox \"ae128c6df2af639fa6f0730b7d085b78e6e6c96449a782f19a3fb54c64e151e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:53:55.838327 env[1218]: time="2025-05-16T00:53:55.838297906Z" level=info msg="CreateContainer within sandbox \"53df598664925a08e9a34315ed0feaa3a6f414fea8bc12e2b1c790f4cb7fe333\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:53:55.845101 env[1218]: time="2025-05-16T00:53:55.845062941Z" level=info msg="CreateContainer within sandbox \"c72174aee0e4056e2972640604d9749b4f51e74f6b12426c7a961e5c68cdb1b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5ac1cbd2e9b169a9f045c5f6a90a181521f6761f55d3e130acd8f53cb43f1118\"" May 16 00:53:55.845813 env[1218]: time="2025-05-16T00:53:55.845788662Z" level=info msg="StartContainer for \"5ac1cbd2e9b169a9f045c5f6a90a181521f6761f55d3e130acd8f53cb43f1118\"" May 16 00:53:55.852203 env[1218]: time="2025-05-16T00:53:55.852148142Z" level=info msg="CreateContainer within sandbox \"ae128c6df2af639fa6f0730b7d085b78e6e6c96449a782f19a3fb54c64e151e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b103074d434f439fc752c541f23086efc358828e8ca4ff0b5abc3e9153f597a4\"" May 16 00:53:55.852726 env[1218]: time="2025-05-16T00:53:55.852642117Z" level=info msg="StartContainer for \"b103074d434f439fc752c541f23086efc358828e8ca4ff0b5abc3e9153f597a4\"" May 16 00:53:55.855423 env[1218]: time="2025-05-16T00:53:55.855389868Z" level=info msg="CreateContainer within sandbox \"53df598664925a08e9a34315ed0feaa3a6f414fea8bc12e2b1c790f4cb7fe333\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7447fd8c4fcf6dbfb5a6a790f3608d91d7f32248401cc471317bac17b92e6f36\"" May 16 00:53:55.856067 env[1218]: time="2025-05-16T00:53:55.856044525Z" level=info msg="StartContainer for \"7447fd8c4fcf6dbfb5a6a790f3608d91d7f32248401cc471317bac17b92e6f36\"" May 16 00:53:55.861419 systemd[1]: Started cri-containerd-5ac1cbd2e9b169a9f045c5f6a90a181521f6761f55d3e130acd8f53cb43f1118.scope. May 16 00:53:55.874951 systemd[1]: Started cri-containerd-b103074d434f439fc752c541f23086efc358828e8ca4ff0b5abc3e9153f597a4.scope. May 16 00:53:55.887292 systemd[1]: Started cri-containerd-7447fd8c4fcf6dbfb5a6a790f3608d91d7f32248401cc471317bac17b92e6f36.scope. 
May 16 00:53:55.936725 env[1218]: time="2025-05-16T00:53:55.936662930Z" level=info msg="StartContainer for \"b103074d434f439fc752c541f23086efc358828e8ca4ff0b5abc3e9153f597a4\" returns successfully" May 16 00:53:55.943973 env[1218]: time="2025-05-16T00:53:55.943863924Z" level=info msg="StartContainer for \"5ac1cbd2e9b169a9f045c5f6a90a181521f6761f55d3e130acd8f53cb43f1118\" returns successfully" May 16 00:53:55.966304 env[1218]: time="2025-05-16T00:53:55.962387178Z" level=info msg="StartContainer for \"7447fd8c4fcf6dbfb5a6a790f3608d91d7f32248401cc471317bac17b92e6f36\" returns successfully" May 16 00:53:55.972239 kubelet[1570]: E0516 00:53:55.969098 1570 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 16 00:53:56.068693 kubelet[1570]: E0516 00:53:56.068658 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" May 16 00:53:56.286592 kubelet[1570]: I0516 00:53:56.286300 1570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:53:56.772646 kubelet[1570]: E0516 00:53:56.772615 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:56.772945 kubelet[1570]: E0516 00:53:56.772927 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:56.774680 kubelet[1570]: E0516 00:53:56.774656 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:56.775003 kubelet[1570]: E0516 00:53:56.774984 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:56.776088 kubelet[1570]: E0516 00:53:56.776067 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:56.776347 kubelet[1570]: E0516 00:53:56.776331 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:57.778065 kubelet[1570]: E0516 00:53:57.778033 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:57.778381 kubelet[1570]: E0516 00:53:57.778146 1570 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:53:57.778381 kubelet[1570]: E0516 00:53:57.778162 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:57.778381 kubelet[1570]: E0516 00:53:57.778255 1570 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:53:58.282556 kubelet[1570]: E0516 00:53:58.282511 1570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:53:58.368688 kubelet[1570]: I0516 00:53:58.368656 1570 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:53:58.368874 kubelet[1570]: E0516 00:53:58.368860 1570 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:53:58.464272 kubelet[1570]: I0516 00:53:58.464236 1570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:53:58.469100 kubelet[1570]: E0516 00:53:58.469071 1570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:53:58.469100 kubelet[1570]: I0516 00:53:58.469100 1570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:53:58.470686 kubelet[1570]: E0516 00:53:58.470656 1570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 00:53:58.470686 kubelet[1570]: I0516 00:53:58.470681 1570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:53:58.472311 kubelet[1570]: E0516 00:53:58.472287 1570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 00:53:58.653684 kubelet[1570]: I0516 00:53:58.653588 1570 apiserver.go:52] "Watching apiserver" May 16 00:53:58.663281 kubelet[1570]: I0516 00:53:58.663253 1570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:53:58.777916 kubelet[1570]: I0516 00:53:58.777884 1570 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:53:58.779770 kubelet[1570]: E0516 00:53:58.779743 1570 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:53:58.780027 kubelet[1570]: E0516 00:53:58.779901 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:00.191892 systemd[1]: Reloading. 
May 16 00:54:00.239124 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-05-16T00:54:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:54:00.239160 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-05-16T00:54:00Z" level=info msg="torcx already run" May 16 00:54:00.293432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:54:00.293463 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:54:00.308759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:54:00.385872 kubelet[1570]: I0516 00:54:00.385845 1570 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:54:00.386000 systemd[1]: Stopping kubelet.service... May 16 00:54:00.410861 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:54:00.411040 systemd[1]: Stopped kubelet.service. May 16 00:54:00.411083 systemd[1]: kubelet.service: Consumed 1.292s CPU time. May 16 00:54:00.412534 systemd[1]: Starting kubelet.service... May 16 00:54:00.504830 systemd[1]: Started kubelet.service. May 16 00:54:00.537095 kubelet[1920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:54:00.537095 kubelet[1920]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:54:00.537095 kubelet[1920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:54:00.537426 kubelet[1920]: I0516 00:54:00.537162 1920 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:54:00.543146 kubelet[1920]: I0516 00:54:00.543097 1920 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 00:54:00.543146 kubelet[1920]: I0516 00:54:00.543126 1920 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:54:00.543503 kubelet[1920]: I0516 00:54:00.543481 1920 server.go:956] "Client rotation is on, will bootstrap in background" May 16 00:54:00.544956 kubelet[1920]: I0516 00:54:00.544934 1920 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 16 00:54:00.547569 kubelet[1920]: I0516 00:54:00.547536 1920 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:54:00.550788 kubelet[1920]: E0516 00:54:00.550740 1920 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:54:00.550788 kubelet[1920]: I0516 00:54:00.550785 1920 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:54:00.553334 kubelet[1920]: I0516 00:54:00.553314 1920 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:54:00.553559 kubelet[1920]: I0516 00:54:00.553538 1920 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:54:00.553704 kubelet[1920]: I0516 00:54:00.553559 1920 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:54:00.553794 kubelet[1920]: I0516 00:54:00.553706 1920 topology_manager.go:138] "Creating topology 
manager with none policy" May 16 00:54:00.553794 kubelet[1920]: I0516 00:54:00.553714 1920 container_manager_linux.go:303] "Creating device plugin manager" May 16 00:54:00.553794 kubelet[1920]: I0516 00:54:00.553756 1920 state_mem.go:36] "Initialized new in-memory state store" May 16 00:54:00.553918 kubelet[1920]: I0516 00:54:00.553884 1920 kubelet.go:480] "Attempting to sync node with API server" May 16 00:54:00.553918 kubelet[1920]: I0516 00:54:00.553898 1920 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:54:00.553963 kubelet[1920]: I0516 00:54:00.553928 1920 kubelet.go:386] "Adding apiserver pod source" May 16 00:54:00.553963 kubelet[1920]: I0516 00:54:00.553941 1920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:54:00.554704 kubelet[1920]: I0516 00:54:00.554686 1920 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:54:00.555360 kubelet[1920]: I0516 00:54:00.555341 1920 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 00:54:00.564388 kubelet[1920]: I0516 00:54:00.564368 1920 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:54:00.564562 kubelet[1920]: I0516 00:54:00.564548 1920 server.go:1289] "Started kubelet" May 16 00:54:00.566069 kubelet[1920]: I0516 00:54:00.566036 1920 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:54:00.566545 kubelet[1920]: I0516 00:54:00.566508 1920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:54:00.567033 kubelet[1920]: I0516 00:54:00.566998 1920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:54:00.567916 kubelet[1920]: I0516 00:54:00.567888 1920 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:54:00.569405 kubelet[1920]: I0516 00:54:00.569379 1920 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.570295 1920 factory.go:223] Registration of the systemd container factory successfully May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.570385 1920 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.571535 1920 server.go:317] "Adding debug handlers to kubelet server" May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.571714 1920 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:54:00.576479 kubelet[1920]: E0516 00:54:00.571907 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.572982 1920 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:54:00.576479 kubelet[1920]: I0516 00:54:00.573283 1920 reconciler.go:26] "Reconciler: start to sync state" May 16 00:54:00.579522 kubelet[1920]: I0516 00:54:00.579499 1920 factory.go:223] Registration of the containerd container factory successfully May 16 00:54:00.585794 kubelet[1920]: E0516 00:54:00.585755 1920 kubelet.go:1600] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:54:00.609351 kubelet[1920]: I0516 00:54:00.609306 1920 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 16 00:54:00.610663 kubelet[1920]: I0516 00:54:00.610632 1920 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 00:54:00.610663 kubelet[1920]: I0516 00:54:00.610658 1920 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 00:54:00.610758 kubelet[1920]: I0516 00:54:00.610676 1920 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:54:00.610758 kubelet[1920]: I0516 00:54:00.610683 1920 kubelet.go:2436] "Starting kubelet main sync loop" May 16 00:54:00.610758 kubelet[1920]: E0516 00:54:00.610729 1920 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:54:00.618462 kubelet[1920]: I0516 00:54:00.618427 1920 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:54:00.618588 kubelet[1920]: I0516 00:54:00.618573 1920 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:54:00.618671 kubelet[1920]: I0516 00:54:00.618660 1920 state_mem.go:36] "Initialized new in-memory state store" May 16 00:54:00.618870 kubelet[1920]: I0516 00:54:00.618856 1920 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:54:00.618967 kubelet[1920]: I0516 00:54:00.618943 1920 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:54:00.619050 kubelet[1920]: I0516 00:54:00.619041 1920 policy_none.go:49] "None policy: Start" May 16 00:54:00.619126 kubelet[1920]: I0516 00:54:00.619104 1920 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:54:00.619218 kubelet[1920]: I0516 00:54:00.619208 1920 state_mem.go:35] "Initializing new in-memory state store" May 16 00:54:00.619458 kubelet[1920]: I0516 00:54:00.619431 1920 state_mem.go:75] "Updated machine memory state" May 16 00:54:00.622731 kubelet[1920]: E0516 00:54:00.622711 1920 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 00:54:00.622954 kubelet[1920]: I0516 00:54:00.622936 1920 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:54:00.623050 kubelet[1920]: I0516 00:54:00.623020 1920 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:54:00.623712 kubelet[1920]: I0516 00:54:00.623697 1920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:54:00.625379 kubelet[1920]: E0516 00:54:00.625354 1920 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:54:00.711988 kubelet[1920]: I0516 00:54:00.711951 1920 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:54:00.712218 kubelet[1920]: I0516 00:54:00.712201 1920 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:54:00.712612 kubelet[1920]: I0516 00:54:00.712583 1920 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:54:00.727151 kubelet[1920]: I0516 00:54:00.727115 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:54:00.732774 kubelet[1920]: I0516 00:54:00.732749 1920 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 00:54:00.732862 kubelet[1920]: I0516 00:54:00.732817 1920 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:54:00.774755 kubelet[1920]: I0516 00:54:00.774641 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:54:00.774755 kubelet[1920]: I0516 00:54:00.774677 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:54:00.774755 kubelet[1920]: I0516 00:54:00.774705 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aff7ae6192ec66855881d13eb9ebee2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aff7ae6192ec66855881d13eb9ebee2c\") " pod="kube-system/kube-apiserver-localhost" May 16 00:54:00.774755 kubelet[1920]: I0516 00:54:00.774727 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:54:00.774755 kubelet[1920]: I0516 00:54:00.774740 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:54:00.774977 kubelet[1920]: I0516 00:54:00.774756 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:54:00.774977 kubelet[1920]: I0516 00:54:00.774773 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 16 00:54:00.774977 kubelet[1920]: I0516 00:54:00.774789 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:54:00.774977 kubelet[1920]: I0516 00:54:00.774803 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:54:01.017878 kubelet[1920]: E0516 00:54:01.017847 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.018021 kubelet[1920]: E0516 00:54:01.017901 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.018962 kubelet[1920]: E0516 00:54:01.018939 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.189415 sudo[1959]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 00:54:01.189672 sudo[1959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 16 00:54:01.554836 kubelet[1920]: I0516 00:54:01.554795 1920 apiserver.go:52] "Watching apiserver" May 16 00:54:01.573976 kubelet[1920]: I0516 00:54:01.573940 1920 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:54:01.598641 kubelet[1920]: I0516 00:54:01.598589 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5985767640000001 podStartE2EDuration="1.598576764s" podCreationTimestamp="2025-05-16 00:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:01.597613914 +0000 UTC m=+1.088857901" watchObservedRunningTime="2025-05-16 00:54:01.598576764 +0000 UTC m=+1.089820711" May 16 00:54:01.612017 kubelet[1920]: I0516 00:54:01.611932 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6119222610000001 podStartE2EDuration="1.611922261s" podCreationTimestamp="2025-05-16 00:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:01.605869864 +0000 UTC m=+1.097113890" watchObservedRunningTime="2025-05-16 00:54:01.611922261 +0000 UTC m=+1.103166248" May 16 00:54:01.620304 kubelet[1920]: I0516 00:54:01.620251 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.6202400620000001 podStartE2EDuration="1.620240062s" podCreationTimestamp="2025-05-16 00:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:01.612288175 +0000 UTC m=+1.103532162" watchObservedRunningTime="2025-05-16 00:54:01.620240062 +0000 UTC m=+1.111484009" May 16 00:54:01.621490 kubelet[1920]: I0516 00:54:01.621463 1920 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:54:01.621712 kubelet[1920]: E0516 00:54:01.621694 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.623170 kubelet[1920]: I0516 00:54:01.623145 1920 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:54:01.627644 kubelet[1920]: E0516 00:54:01.627612 1920 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:54:01.627759 kubelet[1920]: E0516 00:54:01.627742 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.629934 kubelet[1920]: E0516 00:54:01.629907 1920 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:54:01.630046 kubelet[1920]: E0516 00:54:01.630030 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:01.689161 sudo[1959]: pam_unix(sudo:session): session closed for user root May 16 00:54:02.622939 kubelet[1920]: E0516 00:54:02.622904 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:02.623342 kubelet[1920]: E0516 00:54:02.623080 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:03.709541 sudo[1321]: pam_unix(sudo:session): session closed for user root May 16 00:54:03.711686 sshd[1318]: pam_unix(sshd:session): session closed for user core May 16 00:54:03.715017 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:46052.service: Deactivated successfully. May 16 00:54:03.715787 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:54:03.715939 systemd[1]: session-5.scope: Consumed 5.967s CPU time. May 16 00:54:03.716308 systemd-logind[1206]: Session 5 logged out. Waiting for processes to exit. May 16 00:54:03.716950 systemd-logind[1206]: Removed session 5. May 16 00:54:05.738185 kubelet[1920]: I0516 00:54:05.738154 1920 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:54:05.738817 env[1218]: time="2025-05-16T00:54:05.738781978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
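The kuberuntime_manager entry above (and the kubelet_network.go entry that follows) record this node receiving pod CIDR 192.168.0.0/24 and pushing it to the container runtime over CRI. As a minimal sketch of what that assignment implies — every pod IP handed out on this node must fall inside the prefix — here is a standalone Go snippet (illustrative only, not kubelet code; the sample addresses are invented):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Pod CIDR as logged: originalPodCIDR="" -> newPodCIDR="192.168.0.0/24".
	podCIDR, err := netip.ParsePrefix("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	// Sample addresses are made up for illustration.
	for _, s := range []string{"192.168.0.17", "192.168.1.5"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s inside %s: %v\n", s, podCIDR, podCIDR.Contains(addr))
	}
}
```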
May 16 00:54:05.739225 kubelet[1920]: I0516 00:54:05.739187 1920 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:54:06.386584 systemd[1]: Created slice kubepods-besteffort-poda27e7ab9_9a10_4c0e_8a19_d2c3e448f3a8.slice. May 16 00:54:06.401803 systemd[1]: Created slice kubepods-burstable-podaded17ed_67fd_4df7_8183_3bab5437f867.slice. May 16 00:54:06.413833 kubelet[1920]: I0516 00:54:06.413782 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-hostproc\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413833 kubelet[1920]: I0516 00:54:06.413828 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-config-path\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413851 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psxgn\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413868 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-lib-modules\") pod \"kube-proxy-bwqw7\" (UID: \"a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8\") " pod="kube-system/kube-proxy-bwqw7" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413884 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cni-path\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413901 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-etc-cni-netd\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413915 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-lib-modules\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.413986 kubelet[1920]: I0516 00:54:06.413929 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-cgroup\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414129 kubelet[1920]: I0516 00:54:06.413942 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aded17ed-67fd-4df7-8183-3bab5437f867-clustermesh-secrets\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414129 kubelet[1920]: I0516 00:54:06.413955 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-net\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414129 kubelet[1920]: I0516 00:54:06.413970 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-kernel\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414129 kubelet[1920]: I0516 00:54:06.413984 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-hubble-tls\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414129 kubelet[1920]: I0516 00:54:06.413998 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-kube-proxy\") pod \"kube-proxy-bwqw7\" (UID: \"a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8\") " pod="kube-system/kube-proxy-bwqw7" May 16 00:54:06.414241 kubelet[1920]: I0516 00:54:06.414013 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-xtables-lock\") pod \"kube-proxy-bwqw7\" (UID: \"a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8\") " pod="kube-system/kube-proxy-bwqw7" May 16 00:54:06.414241 kubelet[1920]: I0516 00:54:06.414026 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-bpf-maps\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414241 kubelet[1920]: I0516 00:54:06.414041 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-run\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414241 kubelet[1920]: I0516 00:54:06.414055 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-xtables-lock\") pod \"cilium-g9gh8\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " pod="kube-system/cilium-g9gh8" May 16 00:54:06.414241 kubelet[1920]: I0516 00:54:06.414079 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lr6\" (UniqueName: \"kubernetes.io/projected/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-kube-api-access-j4lr6\") pod \"kube-proxy-bwqw7\" (UID: 
\"a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8\") " pod="kube-system/kube-proxy-bwqw7" May 16 00:54:06.515165 kubelet[1920]: I0516 00:54:06.515119 1920 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:54:06.524128 kubelet[1920]: E0516 00:54:06.524094 1920 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 16 00:54:06.524128 kubelet[1920]: E0516 00:54:06.524121 1920 projected.go:194] Error preparing data for projected volume kube-api-access-j4lr6 for pod kube-system/kube-proxy-bwqw7: configmap "kube-root-ca.crt" not found May 16 00:54:06.524247 kubelet[1920]: E0516 00:54:06.524181 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-kube-api-access-j4lr6 podName:a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8 nodeName:}" failed. No retries permitted until 2025-05-16 00:54:07.02415592 +0000 UTC m=+6.515399907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j4lr6" (UniqueName: "kubernetes.io/projected/a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8-kube-api-access-j4lr6") pod "kube-proxy-bwqw7" (UID: "a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8") : configmap "kube-root-ca.crt" not found May 16 00:54:06.526902 kubelet[1920]: E0516 00:54:06.526861 1920 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 16 00:54:06.526902 kubelet[1920]: E0516 00:54:06.526886 1920 projected.go:194] Error preparing data for projected volume kube-api-access-psxgn for pod kube-system/cilium-g9gh8: configmap "kube-root-ca.crt" not found May 16 00:54:06.527010 kubelet[1920]: E0516 00:54:06.526922 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn podName:aded17ed-67fd-4df7-8183-3bab5437f867 nodeName:}" failed. No retries permitted until 2025-05-16 00:54:07.026910857 +0000 UTC m=+6.518154844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-psxgn" (UniqueName: "kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn") pod "cilium-g9gh8" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867") : configmap "kube-root-ca.crt" not found May 16 00:54:06.887253 systemd[1]: Created slice kubepods-besteffort-pod86addf11_2228_4a20_b0c7_75c96eeb959d.slice. 
May 16 00:54:06.917737 kubelet[1920]: I0516 00:54:06.917683 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zsn\" (UniqueName: \"kubernetes.io/projected/86addf11-2228-4a20-b0c7-75c96eeb959d-kube-api-access-j7zsn\") pod \"cilium-operator-6c4d7847fc-f589f\" (UID: \"86addf11-2228-4a20-b0c7-75c96eeb959d\") " pod="kube-system/cilium-operator-6c4d7847fc-f589f" May 16 00:54:06.918025 kubelet[1920]: I0516 00:54:06.917745 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86addf11-2228-4a20-b0c7-75c96eeb959d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f589f\" (UID: \"86addf11-2228-4a20-b0c7-75c96eeb959d\") " pod="kube-system/cilium-operator-6c4d7847fc-f589f" May 16 00:54:07.190030 kubelet[1920]: E0516 00:54:07.189932 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.190820 env[1218]: time="2025-05-16T00:54:07.190750179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f589f,Uid:86addf11-2228-4a20-b0c7-75c96eeb959d,Namespace:kube-system,Attempt:0,}" May 16 00:54:07.210402 env[1218]: time="2025-05-16T00:54:07.210325438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:07.210402 env[1218]: time="2025-05-16T00:54:07.210365610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:07.210402 env[1218]: time="2025-05-16T00:54:07.210376213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:07.210735 env[1218]: time="2025-05-16T00:54:07.210698592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d pid=2025 runtime=io.containerd.runc.v2 May 16 00:54:07.221266 systemd[1]: Started cri-containerd-ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d.scope. 
May 16 00:54:07.262602 env[1218]: time="2025-05-16T00:54:07.262560859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f589f,Uid:86addf11-2228-4a20-b0c7-75c96eeb959d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\"" May 16 00:54:07.263406 kubelet[1920]: E0516 00:54:07.263381 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.266311 env[1218]: time="2025-05-16T00:54:07.265838467Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:54:07.298900 kubelet[1920]: E0516 00:54:07.298866 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.299588 env[1218]: time="2025-05-16T00:54:07.299337327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwqw7,Uid:a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8,Namespace:kube-system,Attempt:0,}" May 16 00:54:07.303925 kubelet[1920]: E0516 00:54:07.303857 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.304305 env[1218]: time="2025-05-16T00:54:07.304280047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9gh8,Uid:aded17ed-67fd-4df7-8183-3bab5437f867,Namespace:kube-system,Attempt:0,}" May 16 00:54:07.314964 env[1218]: time="2025-05-16T00:54:07.314885588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:07.314964 env[1218]: time="2025-05-16T00:54:07.314944846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:07.315099 env[1218]: time="2025-05-16T00:54:07.314966173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:07.315142 env[1218]: time="2025-05-16T00:54:07.315113018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97d9fef72d8a2905ec08db771232502818f14a34aaf20906499e5bd5ea5f88fe pid=2065 runtime=io.containerd.runc.v2 May 16 00:54:07.318762 env[1218]: time="2025-05-16T00:54:07.318710204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:07.318762 env[1218]: time="2025-05-16T00:54:07.318743374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:07.318762 env[1218]: time="2025-05-16T00:54:07.318753217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:07.318931 env[1218]: time="2025-05-16T00:54:07.318889139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91 pid=2082 runtime=io.containerd.runc.v2 May 16 00:54:07.326542 systemd[1]: Started cri-containerd-97d9fef72d8a2905ec08db771232502818f14a34aaf20906499e5bd5ea5f88fe.scope. May 16 00:54:07.335578 systemd[1]: Started cri-containerd-4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91.scope. May 16 00:54:07.369064 env[1218]: time="2025-05-16T00:54:07.369025315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwqw7,Uid:a27e7ab9-9a10-4c0e-8a19-d2c3e448f3a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"97d9fef72d8a2905ec08db771232502818f14a34aaf20906499e5bd5ea5f88fe\"" May 16 00:54:07.370060 kubelet[1920]: E0516 00:54:07.370031 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.372357 env[1218]: time="2025-05-16T00:54:07.372320328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9gh8,Uid:aded17ed-67fd-4df7-8183-3bab5437f867,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\"" May 16 00:54:07.373152 kubelet[1920]: E0516 00:54:07.373129 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.376069 env[1218]: time="2025-05-16T00:54:07.376022787Z" level=info msg="CreateContainer within sandbox \"97d9fef72d8a2905ec08db771232502818f14a34aaf20906499e5bd5ea5f88fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:54:07.390243 env[1218]: time="2025-05-16T00:54:07.390204708Z" level=info msg="CreateContainer within sandbox \"97d9fef72d8a2905ec08db771232502818f14a34aaf20906499e5bd5ea5f88fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8ad65c2265ba675c1d601a8eaea553b15603da344fdad67715dc97151b18cd8\"" May 16 00:54:07.391249 env[1218]: time="2025-05-16T00:54:07.390773763Z" level=info msg="StartContainer for \"f8ad65c2265ba675c1d601a8eaea553b15603da344fdad67715dc97151b18cd8\"" May 16 00:54:07.410927 systemd[1]: Started cri-containerd-f8ad65c2265ba675c1d601a8eaea553b15603da344fdad67715dc97151b18cd8.scope. 
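The entries above and just below walk the CRI happy path once per pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer launches the process. The following schematic shows only that ordering, using entirely hypothetical types — none of these signatures match the real gRPC service in k8s.io/cri-api:

```go
package main

import "fmt"

// fakeRuntime is a toy stand-in for a CRI runtime.
type fakeRuntime struct{ nextID int }

func (r *fakeRuntime) RunPodSandbox(pod string) string {
	r.nextID++
	id := fmt.Sprintf("sandbox-%d", r.nextID)
	fmt.Printf("RunPodSandbox for %s returns %s\n", pod, id)
	return id
}

func (r *fakeRuntime) CreateContainer(sandboxID, name string) string {
	r.nextID++
	id := fmt.Sprintf("container-%d", r.nextID)
	fmt.Printf("CreateContainer within sandbox %s for %s returns %s\n", sandboxID, name, id)
	return id
}

func (r *fakeRuntime) StartContainer(id string) {
	fmt.Printf("StartContainer for %s returns successfully\n", id)
}

func main() {
	rt := &fakeRuntime{}
	sb := rt.RunPodSandbox("kube-proxy-bwqw7")
	rt.StartContainer(rt.CreateContainer(sb, "kube-proxy"))
}
```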
May 16 00:54:07.446828 env[1218]: time="2025-05-16T00:54:07.446731729Z" level=info msg="StartContainer for \"f8ad65c2265ba675c1d601a8eaea553b15603da344fdad67715dc97151b18cd8\" returns successfully" May 16 00:54:07.631980 kubelet[1920]: E0516 00:54:07.631933 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:07.642094 kubelet[1920]: I0516 00:54:07.642036 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwqw7" podStartSLOduration=1.6420229370000001 podStartE2EDuration="1.642022937s" podCreationTimestamp="2025-05-16 00:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:07.642014495 +0000 UTC m=+7.133258522" watchObservedRunningTime="2025-05-16 00:54:07.642022937 +0000 UTC m=+7.133266924" May 16 00:54:08.653559 kubelet[1920]: E0516 00:54:08.653513 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:09.635229 kubelet[1920]: E0516 00:54:09.635182 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:09.796380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440343533.mount: Deactivated successfully. May 16 00:54:10.102052 kubelet[1920]: E0516 00:54:10.101611 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:10.362823 kubelet[1920]: E0516 00:54:10.353812 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:10.513869 env[1218]: time="2025-05-16T00:54:10.513815053Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:10.515418 env[1218]: time="2025-05-16T00:54:10.515387942Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:10.517180 env[1218]: time="2025-05-16T00:54:10.517146359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:10.517739 env[1218]: time="2025-05-16T00:54:10.517688980Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:54:10.521753 env[1218]: time="2025-05-16T00:54:10.521667493Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:54:10.525138 env[1218]: 
time="2025-05-16T00:54:10.525087861Z" level=info msg="CreateContainer within sandbox \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:54:10.536029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055609177.mount: Deactivated successfully. May 16 00:54:10.538541 env[1218]: time="2025-05-16T00:54:10.538508547Z" level=info msg="CreateContainer within sandbox \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\"" May 16 00:54:10.538990 env[1218]: time="2025-05-16T00:54:10.538964546Z" level=info msg="StartContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\"" May 16 00:54:10.553928 systemd[1]: Started cri-containerd-c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e.scope. May 16 00:54:10.598248 env[1218]: time="2025-05-16T00:54:10.598144837Z" level=info msg="StartContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" returns successfully" May 16 00:54:10.640794 kubelet[1920]: E0516 00:54:10.640686 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:10.641365 kubelet[1920]: E0516 00:54:10.641339 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:10.641660 kubelet[1920]: E0516 00:54:10.641634 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:10.653900 kubelet[1920]: I0516 00:54:10.650836 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f589f" podStartSLOduration=1.397351734 podStartE2EDuration="4.65082416s" podCreationTimestamp="2025-05-16 00:54:06 +0000 UTC" firstStartedPulling="2025-05-16 00:54:07.265571145 +0000 UTC m=+6.756815132" lastFinishedPulling="2025-05-16 00:54:10.519043571 +0000 UTC m=+10.010287558" observedRunningTime="2025-05-16 00:54:10.650573855 +0000 UTC m=+10.141817842" watchObservedRunningTime="2025-05-16 00:54:10.65082416 +0000 UTC m=+10.142068147" May 16 00:54:11.642330 kubelet[1920]: E0516 00:54:11.642242 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:11.642707 kubelet[1920]: E0516 00:54:11.642369 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:18.006005 update_engine[1211]: I0516 00:54:18.005577 1211 update_attempter.cc:509] Updating boot flags... May 16 00:54:18.266796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786018881.mount: Deactivated successfully. 
May 16 00:54:20.452782 env[1218]: time="2025-05-16T00:54:20.452739817Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:20.454092 env[1218]: time="2025-05-16T00:54:20.454064221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:20.456021 env[1218]: time="2025-05-16T00:54:20.455985755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:20.456600 env[1218]: time="2025-05-16T00:54:20.456576686Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:54:20.466690 env[1218]: time="2025-05-16T00:54:20.466653072Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:54:20.475789 env[1218]: time="2025-05-16T00:54:20.475755789Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\"" May 16 00:54:20.477037 env[1218]: time="2025-05-16T00:54:20.476191736Z" level=info msg="StartContainer for \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\"" May 16 00:54:20.502666 systemd[1]: run-containerd-runc-k8s.io-636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263-runc.1Xv44N.mount: Deactivated successfully. May 16 00:54:20.504334 systemd[1]: Started cri-containerd-636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263.scope. May 16 00:54:20.543064 env[1218]: time="2025-05-16T00:54:20.543019950Z" level=info msg="StartContainer for \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\" returns successfully" May 16 00:54:20.609374 systemd[1]: cri-containerd-636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263.scope: Deactivated successfully. 
May 16 00:54:20.709737 kubelet[1920]: E0516 00:54:20.709632 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:20.758203 env[1218]: time="2025-05-16T00:54:20.758152080Z" level=info msg="shim disconnected" id=636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263 May 16 00:54:20.758203 env[1218]: time="2025-05-16T00:54:20.758199088Z" level=warning msg="cleaning up after shim disconnected" id=636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263 namespace=k8s.io May 16 00:54:20.758203 env[1218]: time="2025-05-16T00:54:20.758210569Z" level=info msg="cleaning up dead shim" May 16 00:54:20.764960 env[1218]: time="2025-05-16T00:54:20.764917038Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2417 runtime=io.containerd.runc.v2\n" May 16 00:54:21.473722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263-rootfs.mount: Deactivated successfully. May 16 00:54:21.667360 kubelet[1920]: E0516 00:54:21.667332 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:21.676020 env[1218]: time="2025-05-16T00:54:21.675984645Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:54:21.691480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044894693.mount: Deactivated successfully. May 16 00:54:21.693977 env[1218]: time="2025-05-16T00:54:21.693938388Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\"" May 16 00:54:21.694677 env[1218]: time="2025-05-16T00:54:21.694650812Z" level=info msg="StartContainer for \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\"" May 16 00:54:21.709225 systemd[1]: Started cri-containerd-482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5.scope. May 16 00:54:21.741030 env[1218]: time="2025-05-16T00:54:21.740988463Z" level=info msg="StartContainer for \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\" returns successfully" May 16 00:54:21.766107 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:54:21.766308 systemd[1]: Stopped systemd-sysctl.service. May 16 00:54:21.766479 systemd[1]: Stopping systemd-sysctl.service... May 16 00:54:21.767928 systemd[1]: Starting systemd-sysctl.service... May 16 00:54:21.768860 systemd[1]: cri-containerd-482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5.scope: Deactivated successfully. May 16 00:54:21.778358 systemd[1]: Finished systemd-sysctl.service. 
May 16 00:54:21.790846 env[1218]: time="2025-05-16T00:54:21.790797781Z" level=info msg="shim disconnected" id=482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5 May 16 00:54:21.790993 env[1218]: time="2025-05-16T00:54:21.790851628Z" level=warning msg="cleaning up after shim disconnected" id=482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5 namespace=k8s.io May 16 00:54:21.790993 env[1218]: time="2025-05-16T00:54:21.790861310Z" level=info msg="cleaning up dead shim" May 16 00:54:21.797322 env[1218]: time="2025-05-16T00:54:21.797290969Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2479 runtime=io.containerd.runc.v2\n" May 16 00:54:22.473664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5-rootfs.mount: Deactivated successfully. May 16 00:54:22.670435 kubelet[1920]: E0516 00:54:22.670397 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:22.682511 env[1218]: time="2025-05-16T00:54:22.679883502Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:54:22.692787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708083977.mount: Deactivated successfully. May 16 00:54:22.698409 env[1218]: time="2025-05-16T00:54:22.698372077Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\"" May 16 00:54:22.700015 env[1218]: time="2025-05-16T00:54:22.699135303Z" level=info msg="StartContainer for \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\"" May 16 00:54:22.714558 systemd[1]: Started cri-containerd-4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329.scope. May 16 00:54:22.760953 env[1218]: time="2025-05-16T00:54:22.760913465Z" level=info msg="StartContainer for \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\" returns successfully" May 16 00:54:22.773593 systemd[1]: cri-containerd-4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329.scope: Deactivated successfully. 
May 16 00:54:22.794047 env[1218]: time="2025-05-16T00:54:22.793993711Z" level=info msg="shim disconnected" id=4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329 May 16 00:54:22.794301 env[1218]: time="2025-05-16T00:54:22.794270110Z" level=warning msg="cleaning up after shim disconnected" id=4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329 namespace=k8s.io May 16 00:54:22.794373 env[1218]: time="2025-05-16T00:54:22.794359402Z" level=info msg="cleaning up dead shim" May 16 00:54:22.803695 env[1218]: time="2025-05-16T00:54:22.803648136Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2535 runtime=io.containerd.runc.v2\n" May 16 00:54:23.673908 kubelet[1920]: E0516 00:54:23.673880 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:23.677520 env[1218]: time="2025-05-16T00:54:23.677479851Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:54:23.689769 env[1218]: time="2025-05-16T00:54:23.689723077Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\"" May 16 00:54:23.690436 env[1218]: time="2025-05-16T00:54:23.690406648Z" level=info msg="StartContainer for \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\"" May 16 00:54:23.708231 systemd[1]: Started cri-containerd-93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be.scope. May 16 00:54:23.733586 env[1218]: time="2025-05-16T00:54:23.733537816Z" level=info msg="StartContainer for \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\" returns successfully" May 16 00:54:23.733682 systemd[1]: cri-containerd-93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be.scope: Deactivated successfully. May 16 00:54:23.754112 env[1218]: time="2025-05-16T00:54:23.754069343Z" level=info msg="shim disconnected" id=93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be May 16 00:54:23.754272 env[1218]: time="2025-05-16T00:54:23.754114309Z" level=warning msg="cleaning up after shim disconnected" id=93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be namespace=k8s.io May 16 00:54:23.754272 env[1218]: time="2025-05-16T00:54:23.754123070Z" level=info msg="cleaning up dead shim" May 16 00:54:23.760075 env[1218]: time="2025-05-16T00:54:23.760024494Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2589 runtime=io.containerd.runc.v2\n" May 16 00:54:24.473769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be-rootfs.mount: Deactivated successfully. 
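The short-lived containers above — mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state — are cilium's init steps: each starts, exits, has its scope deactivated and its shim reaped before the next begins, and only then does the long-running cilium-agent start below. The same run-to-completion sequencing in miniature (placeholder `true` commands, not the real cilium entrypoints):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Init steps observed in the log, in order. Each must exit 0 before
	// the next starts; a failure aborts the sequence, as the kubelet would.
	steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	for _, name := range steps {
		if err := exec.Command("true").Run(); err != nil { // placeholder command
			fmt.Printf("init container %s failed: %v\n", name, err)
			return
		}
		fmt.Printf("init container %s completed\n", name)
	}
	fmt.Println("starting cilium-agent")
}
```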
May 16 00:54:24.677604 kubelet[1920]: E0516 00:54:24.677574 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:24.681073 env[1218]: time="2025-05-16T00:54:24.681032139Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:54:24.693070 env[1218]: time="2025-05-16T00:54:24.693031940Z" level=info msg="CreateContainer within sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\"" May 16 00:54:24.693581 env[1218]: time="2025-05-16T00:54:24.693556167Z" level=info msg="StartContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\"" May 16 00:54:24.710668 systemd[1]: Started cri-containerd-3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3.scope. May 16 00:54:24.748438 env[1218]: time="2025-05-16T00:54:24.748380877Z" level=info msg="StartContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" returns successfully" May 16 00:54:24.837463 kubelet[1920]: I0516 00:54:24.837198 1920 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:54:24.870623 systemd[1]: Created slice kubepods-burstable-podbb56b92a_6a46_4802_9861_321a2894b0c5.slice. May 16 00:54:24.876583 systemd[1]: Created slice kubepods-burstable-podab287f9e_f3c0_4f7c_8193_f41c5304912a.slice. May 16 00:54:24.952617 kubelet[1920]: I0516 00:54:24.950778 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab287f9e-f3c0-4f7c-8193-f41c5304912a-config-volume\") pod \"coredns-674b8bbfcf-27n6r\" (UID: \"ab287f9e-f3c0-4f7c-8193-f41c5304912a\") " pod="kube-system/coredns-674b8bbfcf-27n6r" May 16 00:54:24.952617 kubelet[1920]: I0516 00:54:24.950816 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb56b92a-6a46-4802-9861-321a2894b0c5-config-volume\") pod \"coredns-674b8bbfcf-nk2k2\" (UID: \"bb56b92a-6a46-4802-9861-321a2894b0c5\") " pod="kube-system/coredns-674b8bbfcf-nk2k2" May 16 00:54:24.952617 kubelet[1920]: I0516 00:54:24.950835 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlgw\" (UniqueName: \"kubernetes.io/projected/ab287f9e-f3c0-4f7c-8193-f41c5304912a-kube-api-access-wmlgw\") pod \"coredns-674b8bbfcf-27n6r\" (UID: \"ab287f9e-f3c0-4f7c-8193-f41c5304912a\") " pod="kube-system/coredns-674b8bbfcf-27n6r" May 16 00:54:24.952617 kubelet[1920]: I0516 00:54:24.950887 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvxrp\" (UniqueName: \"kubernetes.io/projected/bb56b92a-6a46-4802-9861-321a2894b0c5-kube-api-access-vvxrp\") pod \"coredns-674b8bbfcf-nk2k2\" (UID: \"bb56b92a-6a46-4802-9861-321a2894b0c5\") " pod="kube-system/coredns-674b8bbfcf-nk2k2" May 16 00:54:25.006469 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
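The kernel WARNING above appears as cilium begins loading its BPF programs: with unprivileged eBPF still permitted, the Spectre v2 BHB mitigation can be sidestepped, so the kernel calls it out. The setting lives in a sysctl that can be inspected directly; a small probe (plain procfs read, value meanings per the kernel's sysctl documentation):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// 0 = unprivileged bpf() allowed, 1 = disabled permanently (one-way),
	// 2 = disabled but may be re-enabled by a privileged user.
	data, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		fmt.Println("cannot read sysctl:", err)
		return
	}
	switch strings.TrimSpace(string(data)) {
	case "0":
		fmt.Println("unprivileged eBPF enabled; expect the kernel warning seen in the log")
	case "1":
		fmt.Println("unprivileged eBPF disabled permanently")
	case "2":
		fmt.Println("unprivileged eBPF disabled (re-enableable)")
	}
}
```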
May 16 00:54:25.175105 kubelet[1920]: E0516 00:54:25.175063 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:25.176068 env[1218]: time="2025-05-16T00:54:25.176028899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nk2k2,Uid:bb56b92a-6a46-4802-9861-321a2894b0c5,Namespace:kube-system,Attempt:0,}" May 16 00:54:25.179970 kubelet[1920]: E0516 00:54:25.179912 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:25.180388 env[1218]: time="2025-05-16T00:54:25.180354263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-27n6r,Uid:ab287f9e-f3c0-4f7c-8193-f41c5304912a,Namespace:kube-system,Attempt:0,}" May 16 00:54:25.335476 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:54:25.477033 systemd[1]: run-containerd-runc-k8s.io-3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3-runc.79SEEa.mount: Deactivated successfully. May 16 00:54:25.681260 kubelet[1920]: E0516 00:54:25.681228 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:25.935881 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:36644.service. May 16 00:54:25.981908 sshd[2761]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:25.983341 sshd[2761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:25.986507 systemd-logind[1206]: New session 6 of user core. May 16 00:54:25.987259 systemd[1]: Started session-6.scope. May 16 00:54:26.109793 sshd[2761]: pam_unix(sshd:session): session closed for user core May 16 00:54:26.112112 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:36644.service: Deactivated successfully. May 16 00:54:26.112825 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:54:26.113312 systemd-logind[1206]: Session 6 logged out. Waiting for processes to exit. May 16 00:54:26.114070 systemd-logind[1206]: Removed session 6. 
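The dns.go:153 error that recurs throughout this section means the node's resolv.conf lists more nameservers than the three the resolver (and hence the kubelet) will honor, so the extras are dropped and only 1.1.1.1 1.0.0.1 8.8.8.8 is applied to pods. A sketch of that clamping with a hand-rolled parser (the sample resolv.conf content is invented; kubelet's real parser is more involved):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit the kubelet enforces

func main() {
	// Invented example of a resolv.conf that trips the warning.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
search example.internal`

	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		if f := strings.Fields(sc.Text()); len(f) == 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```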
May 16 00:54:26.683309 kubelet[1920]: E0516 00:54:26.683277 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:26.953564 systemd-networkd[1045]: cilium_host: Link UP May 16 00:54:26.954497 systemd-networkd[1045]: cilium_net: Link UP May 16 00:54:26.957491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:54:26.957570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:54:26.956035 systemd-networkd[1045]: cilium_net: Gained carrier May 16 00:54:26.957157 systemd-networkd[1045]: cilium_host: Gained carrier May 16 00:54:27.033327 systemd-networkd[1045]: cilium_vxlan: Link UP May 16 00:54:27.033333 systemd-networkd[1045]: cilium_vxlan: Gained carrier May 16 00:54:27.337494 kernel: NET: Registered PF_ALG protocol family May 16 00:54:27.580575 systemd-networkd[1045]: cilium_host: Gained IPv6LL May 16 00:54:27.685104 kubelet[1920]: E0516 00:54:27.684971 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:27.906140 systemd-networkd[1045]: lxc_health: Link UP May 16 00:54:27.914903 systemd-networkd[1045]: lxc_health: Gained carrier May 16 00:54:27.915469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:54:27.964583 systemd-networkd[1045]: cilium_net: Gained IPv6LL May 16 00:54:28.298609 systemd-networkd[1045]: lxc620e6f34e60e: Link UP May 16 00:54:28.314523 kernel: eth0: renamed from tmpad3d5 May 16 00:54:28.329485 kernel: eth0: renamed from tmpdc92b May 16 00:54:28.336471 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:54:28.336544 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc620e6f34e60e: link becomes ready May 16 00:54:28.336580 systemd-networkd[1045]: lxca8174d1af905: Link UP May 16 00:54:28.337342 systemd-networkd[1045]: lxc620e6f34e60e: Gained carrier May 16 00:54:28.338786 systemd-networkd[1045]: lxca8174d1af905: Gained carrier May 16 00:54:28.339470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca8174d1af905: link becomes ready May 16 00:54:28.476570 systemd-networkd[1045]: cilium_vxlan: Gained IPv6LL May 16 00:54:28.908540 kubelet[1920]: E0516 00:54:28.908500 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:29.180660 systemd-networkd[1045]: lxc_health: Gained IPv6LL May 16 00:54:29.326319 kubelet[1920]: I0516 00:54:29.326238 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g9gh8" podStartSLOduration=10.242560389 podStartE2EDuration="23.326224571s" podCreationTimestamp="2025-05-16 00:54:06 +0000 UTC" firstStartedPulling="2025-05-16 00:54:07.373908257 +0000 UTC m=+6.865152244" lastFinishedPulling="2025-05-16 00:54:20.457572439 +0000 UTC m=+19.948816426" observedRunningTime="2025-05-16 00:54:25.697420242 +0000 UTC m=+25.188664229" watchObservedRunningTime="2025-05-16 00:54:29.326224571 +0000 UTC m=+28.817468518" May 16 00:54:29.436600 systemd-networkd[1045]: lxc620e6f34e60e: Gained IPv6LL May 16 00:54:29.690710 kubelet[1920]: E0516 00:54:29.690557 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 
00:54:30.204605 systemd-networkd[1045]: lxca8174d1af905: Gained IPv6LL May 16 00:54:30.691933 kubelet[1920]: E0516 00:54:30.691888 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:31.114791 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:36658.service. May 16 00:54:31.161848 sshd[3154]: Accepted publickey for core from 10.0.0.1 port 36658 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:31.163535 sshd[3154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:31.167278 systemd-logind[1206]: New session 7 of user core. May 16 00:54:31.167834 systemd[1]: Started session-7.scope. May 16 00:54:31.297332 sshd[3154]: pam_unix(sshd:session): session closed for user core May 16 00:54:31.299898 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:36658.service: Deactivated successfully. May 16 00:54:31.300623 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:54:31.301152 systemd-logind[1206]: Session 7 logged out. Waiting for processes to exit. May 16 00:54:31.301970 systemd-logind[1206]: Removed session 7. May 16 00:54:31.888481 env[1218]: time="2025-05-16T00:54:31.888330882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:31.888892 env[1218]: time="2025-05-16T00:54:31.888418370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:31.888892 env[1218]: time="2025-05-16T00:54:31.888484736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:31.888892 env[1218]: time="2025-05-16T00:54:31.888494617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:31.889029 env[1218]: time="2025-05-16T00:54:31.888931258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5 pid=3187 runtime=io.containerd.runc.v2 May 16 00:54:31.889148 env[1218]: time="2025-05-16T00:54:31.889119516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:31.889343 env[1218]: time="2025-05-16T00:54:31.889316334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:31.889614 env[1218]: time="2025-05-16T00:54:31.889582279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc92bc0589e1c50c20dca8c5bdb5448068a282ec9015d8ad3e27a776eff2cccc pid=3188 runtime=io.containerd.runc.v2 May 16 00:54:31.907782 systemd[1]: Started cri-containerd-ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5.scope. May 16 00:54:31.908709 systemd[1]: Started cri-containerd-dc92bc0589e1c50c20dca8c5bdb5448068a282ec9015d8ad3e27a776eff2cccc.scope. 
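The systemd-networkd lines above show each cilium interface (cilium_host, cilium_net, cilium_vxlan, lxc_health, and the per-pod lxc* devices) gaining carrier and then an fe80:: link-local address ("Gained IPv6LL"). Interfaces and their link-local addresses can be enumerated from Go's standard library; a diagnostic sketch, not what networkd does internally:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			// Keep only IPv6 link-local (fe80::/10) unicast addresses.
			if n, ok := a.(*net.IPNet); ok && n.IP.To4() == nil && n.IP.IsLinkLocalUnicast() {
				fmt.Printf("%s has IPv6LL %s\n", ifc.Name, n.IP)
			}
		}
	}
}
```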
May 16 00:54:31.954804 systemd-resolved[1161]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:54:31.955872 systemd-resolved[1161]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:54:31.976981 env[1218]: time="2025-05-16T00:54:31.976935316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nk2k2,Uid:bb56b92a-6a46-4802-9861-321a2894b0c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc92bc0589e1c50c20dca8c5bdb5448068a282ec9015d8ad3e27a776eff2cccc\"" May 16 00:54:31.978067 kubelet[1920]: E0516 00:54:31.977588 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:31.979948 env[1218]: time="2025-05-16T00:54:31.979900035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-27n6r,Uid:ab287f9e-f3c0-4f7c-8193-f41c5304912a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5\"" May 16 00:54:31.981747 kubelet[1920]: E0516 00:54:31.981715 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:31.985566 env[1218]: time="2025-05-16T00:54:31.985511401Z" level=info msg="CreateContainer within sandbox \"dc92bc0589e1c50c20dca8c5bdb5448068a282ec9015d8ad3e27a776eff2cccc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:54:31.990138 env[1218]: time="2025-05-16T00:54:31.990068229Z" level=info msg="CreateContainer within sandbox \"ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:54:32.003109 env[1218]: time="2025-05-16T00:54:32.003057278Z" level=info msg="CreateContainer within sandbox \"dc92bc0589e1c50c20dca8c5bdb5448068a282ec9015d8ad3e27a776eff2cccc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c7a386b9001aded99177077a669b11ef542a02baa69f186f4d016781bbe8c67\"" May 16 00:54:32.003719 env[1218]: time="2025-05-16T00:54:32.003666133Z" level=info msg="StartContainer for \"1c7a386b9001aded99177077a669b11ef542a02baa69f186f4d016781bbe8c67\"" May 16 00:54:32.017963 systemd[1]: Started cri-containerd-1c7a386b9001aded99177077a669b11ef542a02baa69f186f4d016781bbe8c67.scope. May 16 00:54:32.025023 env[1218]: time="2025-05-16T00:54:32.024978656Z" level=info msg="CreateContainer within sandbox \"ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c04d0c30d192c505379e38ebcb527e5daa306782a215827a76721b0a5fa7b80\"" May 16 00:54:32.026060 env[1218]: time="2025-05-16T00:54:32.025828813Z" level=info msg="StartContainer for \"9c04d0c30d192c505379e38ebcb527e5daa306782a215827a76721b0a5fa7b80\"" May 16 00:54:32.040872 systemd[1]: Started cri-containerd-9c04d0c30d192c505379e38ebcb527e5daa306782a215827a76721b0a5fa7b80.scope. 
May 16 00:54:32.101219 env[1218]: time="2025-05-16T00:54:32.101164771Z" level=info msg="StartContainer for \"1c7a386b9001aded99177077a669b11ef542a02baa69f186f4d016781bbe8c67\" returns successfully" May 16 00:54:32.118514 env[1218]: time="2025-05-16T00:54:32.118467573Z" level=info msg="StartContainer for \"9c04d0c30d192c505379e38ebcb527e5daa306782a215827a76721b0a5fa7b80\" returns successfully" May 16 00:54:32.699110 kubelet[1920]: E0516 00:54:32.699071 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:32.700542 kubelet[1920]: E0516 00:54:32.700512 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:32.707330 kubelet[1920]: I0516 00:54:32.707289 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-27n6r" podStartSLOduration=26.707278666 podStartE2EDuration="26.707278666s" podCreationTimestamp="2025-05-16 00:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:32.706798942 +0000 UTC m=+32.198042929" watchObservedRunningTime="2025-05-16 00:54:32.707278666 +0000 UTC m=+32.198522613" May 16 00:54:32.724303 kubelet[1920]: I0516 00:54:32.724238 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nk2k2" podStartSLOduration=26.724223515 podStartE2EDuration="26.724223515s" podCreationTimestamp="2025-05-16 00:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:54:32.724012496 +0000 UTC m=+32.215256483" watchObservedRunningTime="2025-05-16 00:54:32.724223515 +0000 UTC m=+32.215467502" May 16 00:54:32.892629 systemd[1]: run-containerd-runc-k8s.io-ad3d52b8de1e0d557f677e4e0c4f3be1b02ca6e81a2a1d6deaa1acd8e99b89c5-runc.I0Zpcb.mount: Deactivated successfully. May 16 00:54:33.702326 kubelet[1920]: E0516 00:54:33.702298 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:33.702695 kubelet[1920]: E0516 00:54:33.702379 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:34.704293 kubelet[1920]: E0516 00:54:34.704258 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:36.302279 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:39188.service. May 16 00:54:36.348865 sshd[3339]: Accepted publickey for core from 10.0.0.1 port 39188 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:36.350643 sshd[3339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:36.354758 systemd-logind[1206]: New session 8 of user core. May 16 00:54:36.354934 systemd[1]: Started session-8.scope. May 16 00:54:36.475847 sshd[3339]: pam_unix(sshd:session): session closed for user core May 16 00:54:36.478431 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:39188.service: Deactivated successfully. 
May 16 00:54:36.479180 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:54:36.479793 systemd-logind[1206]: Session 8 logged out. Waiting for processes to exit. May 16 00:54:36.480635 systemd-logind[1206]: Removed session 8. May 16 00:54:41.481134 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:39194.service. May 16 00:54:41.524313 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 39194 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:41.525859 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:41.529525 systemd-logind[1206]: New session 9 of user core. May 16 00:54:41.530142 systemd[1]: Started session-9.scope. May 16 00:54:41.636599 sshd[3355]: pam_unix(sshd:session): session closed for user core May 16 00:54:41.639217 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:39194.service: Deactivated successfully. May 16 00:54:41.639821 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:54:41.640356 systemd-logind[1206]: Session 9 logged out. Waiting for processes to exit. May 16 00:54:41.641414 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:39210.service. May 16 00:54:41.642113 systemd-logind[1206]: Removed session 9. May 16 00:54:41.686980 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 39210 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:41.688427 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:41.691808 systemd-logind[1206]: New session 10 of user core. May 16 00:54:41.692560 systemd[1]: Started session-10.scope. May 16 00:54:41.837967 sshd[3369]: pam_unix(sshd:session): session closed for user core May 16 00:54:41.839878 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:39212.service. May 16 00:54:41.848016 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:39210.service: Deactivated successfully. May 16 00:54:41.848699 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:54:41.849687 systemd-logind[1206]: Session 10 logged out. Waiting for processes to exit. May 16 00:54:41.852174 systemd-logind[1206]: Removed session 10. May 16 00:54:41.892392 sshd[3379]: Accepted publickey for core from 10.0.0.1 port 39212 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:41.893861 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:41.896916 systemd-logind[1206]: New session 11 of user core. May 16 00:54:41.897730 systemd[1]: Started session-11.scope. May 16 00:54:42.005970 sshd[3379]: pam_unix(sshd:session): session closed for user core May 16 00:54:42.008721 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:39212.service: Deactivated successfully. May 16 00:54:42.009420 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:54:42.009942 systemd-logind[1206]: Session 11 logged out. Waiting for processes to exit. May 16 00:54:42.010625 systemd-logind[1206]: Removed session 11. May 16 00:54:47.011391 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:59438.service. May 16 00:54:47.055127 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 59438 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:47.056413 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:47.059997 systemd-logind[1206]: New session 12 of user core. May 16 00:54:47.060884 systemd[1]: Started session-12.scope. 
May 16 00:54:47.173739 sshd[3394]: pam_unix(sshd:session): session closed for user core May 16 00:54:47.176223 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:59438.service: Deactivated successfully. May 16 00:54:47.176955 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:54:47.177475 systemd-logind[1206]: Session 12 logged out. Waiting for processes to exit. May 16 00:54:47.178153 systemd-logind[1206]: Removed session 12. May 16 00:54:52.179062 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:59454.service. May 16 00:54:52.222459 sshd[3408]: Accepted publickey for core from 10.0.0.1 port 59454 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:52.223928 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:52.227462 systemd-logind[1206]: New session 13 of user core. May 16 00:54:52.227784 systemd[1]: Started session-13.scope. May 16 00:54:52.338735 sshd[3408]: pam_unix(sshd:session): session closed for user core May 16 00:54:52.342317 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:59468.service. May 16 00:54:52.342856 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:59454.service: Deactivated successfully. May 16 00:54:52.343581 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:54:52.344699 systemd-logind[1206]: Session 13 logged out. Waiting for processes to exit. May 16 00:54:52.349418 systemd-logind[1206]: Removed session 13. May 16 00:54:52.387884 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 59468 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:52.388990 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:52.392246 systemd-logind[1206]: New session 14 of user core. May 16 00:54:52.392634 systemd[1]: Started session-14.scope. May 16 00:54:52.593286 sshd[3420]: pam_unix(sshd:session): session closed for user core May 16 00:54:52.596764 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:34378.service. May 16 00:54:52.597252 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:59468.service: Deactivated successfully. May 16 00:54:52.597966 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:54:52.598589 systemd-logind[1206]: Session 14 logged out. Waiting for processes to exit. May 16 00:54:52.599724 systemd-logind[1206]: Removed session 14. May 16 00:54:52.641557 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 34378 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:52.642672 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:52.645823 systemd-logind[1206]: New session 15 of user core. May 16 00:54:52.646575 systemd[1]: Started session-15.scope. May 16 00:54:53.280583 sshd[3431]: pam_unix(sshd:session): session closed for user core May 16 00:54:53.283649 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:34384.service. May 16 00:54:53.284148 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:34378.service: Deactivated successfully. May 16 00:54:53.284912 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:54:53.285811 systemd-logind[1206]: Session 15 logged out. Waiting for processes to exit. May 16 00:54:53.287018 systemd-logind[1206]: Removed session 15. 
May 16 00:54:53.331660 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:53.332957 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:53.336079 systemd-logind[1206]: New session 16 of user core. May 16 00:54:53.336914 systemd[1]: Started session-16.scope. May 16 00:54:53.552397 sshd[3449]: pam_unix(sshd:session): session closed for user core May 16 00:54:53.555533 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:34394.service. May 16 00:54:53.556048 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:34384.service: Deactivated successfully. May 16 00:54:53.556685 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:54:53.559596 systemd-logind[1206]: Session 16 logged out. Waiting for processes to exit. May 16 00:54:53.561568 systemd-logind[1206]: Removed session 16. May 16 00:54:53.601229 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:53.602338 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:53.605626 systemd-logind[1206]: New session 17 of user core. May 16 00:54:53.606383 systemd[1]: Started session-17.scope. May 16 00:54:53.715823 sshd[3463]: pam_unix(sshd:session): session closed for user core May 16 00:54:53.718440 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:34394.service: Deactivated successfully. May 16 00:54:53.719134 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:54:53.719691 systemd-logind[1206]: Session 17 logged out. Waiting for processes to exit. May 16 00:54:53.720312 systemd-logind[1206]: Removed session 17. May 16 00:54:58.720115 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:34400.service. May 16 00:54:58.763549 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:58.764703 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:58.768338 systemd-logind[1206]: New session 18 of user core. May 16 00:54:58.768762 systemd[1]: Started session-18.scope. May 16 00:54:58.877185 sshd[3480]: pam_unix(sshd:session): session closed for user core May 16 00:54:58.879704 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:34400.service: Deactivated successfully. May 16 00:54:58.880373 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:54:58.880891 systemd-logind[1206]: Session 18 logged out. Waiting for processes to exit. May 16 00:54:58.881615 systemd-logind[1206]: Removed session 18. May 16 00:55:03.882252 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:48454.service. May 16 00:55:03.925417 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 48454 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:03.927023 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:03.930519 systemd-logind[1206]: New session 19 of user core. May 16 00:55:03.930944 systemd[1]: Started session-19.scope. May 16 00:55:04.037643 sshd[3496]: pam_unix(sshd:session): session closed for user core May 16 00:55:04.040341 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:48454.service: Deactivated successfully. May 16 00:55:04.041129 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:55:04.041614 systemd-logind[1206]: Session 19 logged out. Waiting for processes to exit. 
May 16 00:55:04.042257 systemd-logind[1206]: Removed session 19. May 16 00:55:08.613285 kubelet[1920]: E0516 00:55:08.613234 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:09.041945 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:48466.service. May 16 00:55:09.085235 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 48466 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:09.086622 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:09.089594 systemd-logind[1206]: New session 20 of user core. May 16 00:55:09.090373 systemd[1]: Started session-20.scope. May 16 00:55:09.197871 sshd[3512]: pam_unix(sshd:session): session closed for user core May 16 00:55:09.201413 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:48474.service. May 16 00:55:09.201930 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:48466.service: Deactivated successfully. May 16 00:55:09.202579 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:55:09.203065 systemd-logind[1206]: Session 20 logged out. Waiting for processes to exit. May 16 00:55:09.204051 systemd-logind[1206]: Removed session 20. May 16 00:55:09.247172 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 48474 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:09.248253 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:09.251519 systemd-logind[1206]: New session 21 of user core. May 16 00:55:09.252105 systemd[1]: Started session-21.scope. May 16 00:55:11.275524 env[1218]: time="2025-05-16T00:55:11.275463185Z" level=info msg="StopContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" with timeout 30 (s)" May 16 00:55:11.275900 env[1218]: time="2025-05-16T00:55:11.275865697Z" level=info msg="Stop container \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" with signal terminated" May 16 00:55:11.292545 systemd[1]: cri-containerd-c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e.scope: Deactivated successfully. May 16 00:55:11.310273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e-rootfs.mount: Deactivated successfully. 
May 16 00:55:11.317089 env[1218]: time="2025-05-16T00:55:11.317018086Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:55:11.319979 env[1218]: time="2025-05-16T00:55:11.319801788Z" level=info msg="shim disconnected" id=c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e May 16 00:55:11.319979 env[1218]: time="2025-05-16T00:55:11.319834388Z" level=warning msg="cleaning up after shim disconnected" id=c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e namespace=k8s.io May 16 00:55:11.319979 env[1218]: time="2025-05-16T00:55:11.319844507Z" level=info msg="cleaning up dead shim" May 16 00:55:11.322848 env[1218]: time="2025-05-16T00:55:11.322822206Z" level=info msg="StopContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" with timeout 2 (s)" May 16 00:55:11.323116 env[1218]: time="2025-05-16T00:55:11.323094080Z" level=info msg="Stop container \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" with signal terminated" May 16 00:55:11.327824 systemd-networkd[1045]: lxc_health: Link DOWN May 16 00:55:11.327830 systemd-networkd[1045]: lxc_health: Lost carrier May 16 00:55:11.328972 env[1218]: time="2025-05-16T00:55:11.328939679Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3575 runtime=io.containerd.runc.v2\n" May 16 00:55:11.331033 env[1218]: time="2025-05-16T00:55:11.330993237Z" level=info msg="StopContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" returns successfully" May 16 00:55:11.331718 env[1218]: time="2025-05-16T00:55:11.331685903Z" level=info msg="StopPodSandbox for \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\"" May 16 00:55:11.331761 env[1218]: time="2025-05-16T00:55:11.331746181Z" level=info msg="Container to stop \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.333389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d-shm.mount: Deactivated successfully. May 16 00:55:11.337936 systemd[1]: cri-containerd-ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d.scope: Deactivated successfully. May 16 00:55:11.361324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d-rootfs.mount: Deactivated successfully. May 16 00:55:11.365498 systemd[1]: cri-containerd-3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3.scope: Deactivated successfully. May 16 00:55:11.365813 systemd[1]: cri-containerd-3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3.scope: Consumed 6.350s CPU time. 
May 16 00:55:11.367698 env[1218]: time="2025-05-16T00:55:11.367654399Z" level=info msg="shim disconnected" id=ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d May 16 00:55:11.367698 env[1218]: time="2025-05-16T00:55:11.367699158Z" level=warning msg="cleaning up after shim disconnected" id=ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d namespace=k8s.io May 16 00:55:11.367895 env[1218]: time="2025-05-16T00:55:11.367708998Z" level=info msg="cleaning up dead shim" May 16 00:55:11.375575 env[1218]: time="2025-05-16T00:55:11.375515316Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3615 runtime=io.containerd.runc.v2\n" May 16 00:55:11.375885 env[1218]: time="2025-05-16T00:55:11.375850109Z" level=info msg="TearDown network for sandbox \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\" successfully" May 16 00:55:11.375885 env[1218]: time="2025-05-16T00:55:11.375881869Z" level=info msg="StopPodSandbox for \"ecca840336fbd62a7ff357be14fcf2f81a6b9d5fd18329ab41c7eae223e8df9d\" returns successfully" May 16 00:55:11.384168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3-rootfs.mount: Deactivated successfully. May 16 00:55:11.392439 env[1218]: time="2025-05-16T00:55:11.392387007Z" level=info msg="shim disconnected" id=3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3 May 16 00:55:11.392439 env[1218]: time="2025-05-16T00:55:11.392429566Z" level=warning msg="cleaning up after shim disconnected" id=3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3 namespace=k8s.io May 16 00:55:11.392439 env[1218]: time="2025-05-16T00:55:11.392438806Z" level=info msg="cleaning up dead shim" May 16 00:55:11.398779 env[1218]: time="2025-05-16T00:55:11.398740196Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n" May 16 00:55:11.400824 env[1218]: time="2025-05-16T00:55:11.400728355Z" level=info msg="StopContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" returns successfully" May 16 00:55:11.401682 env[1218]: time="2025-05-16T00:55:11.401631456Z" level=info msg="StopPodSandbox for \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\"" May 16 00:55:11.401763 env[1218]: time="2025-05-16T00:55:11.401693415Z" level=info msg="Container to stop \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.401763 env[1218]: time="2025-05-16T00:55:11.401710135Z" level=info msg="Container to stop \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.401763 env[1218]: time="2025-05-16T00:55:11.401721254Z" level=info msg="Container to stop \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.401763 env[1218]: time="2025-05-16T00:55:11.401733094Z" level=info msg="Container to stop \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.401763 env[1218]: time="2025-05-16T00:55:11.401743254Z" level=info msg="Container to stop 
\"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:55:11.406988 systemd[1]: cri-containerd-4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91.scope: Deactivated successfully. May 16 00:55:11.423587 env[1218]: time="2025-05-16T00:55:11.423538483Z" level=info msg="shim disconnected" id=4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91 May 16 00:55:11.423587 env[1218]: time="2025-05-16T00:55:11.423587282Z" level=warning msg="cleaning up after shim disconnected" id=4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91 namespace=k8s.io May 16 00:55:11.423809 env[1218]: time="2025-05-16T00:55:11.423602042Z" level=info msg="cleaning up dead shim" May 16 00:55:11.430531 env[1218]: time="2025-05-16T00:55:11.430487379Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n" May 16 00:55:11.430807 env[1218]: time="2025-05-16T00:55:11.430780733Z" level=info msg="TearDown network for sandbox \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" successfully" May 16 00:55:11.430807 env[1218]: time="2025-05-16T00:55:11.430809293Z" level=info msg="StopPodSandbox for \"4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91\" returns successfully" May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527055 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-run\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527119 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86addf11-2228-4a20-b0c7-75c96eeb959d-cilium-config-path\") pod \"86addf11-2228-4a20-b0c7-75c96eeb959d\" (UID: \"86addf11-2228-4a20-b0c7-75c96eeb959d\") " May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527153 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-lib-modules\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527171 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7zsn\" (UniqueName: \"kubernetes.io/projected/86addf11-2228-4a20-b0c7-75c96eeb959d-kube-api-access-j7zsn\") pod \"86addf11-2228-4a20-b0c7-75c96eeb959d\" (UID: \"86addf11-2228-4a20-b0c7-75c96eeb959d\") " May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527189 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-etc-cni-netd\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527198 kubelet[1920]: I0516 00:55:11.527204 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-kernel\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 
00:55:11.527668 kubelet[1920]: I0516 00:55:11.527221 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-hubble-tls\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527668 kubelet[1920]: I0516 00:55:11.527236 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-xtables-lock\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527668 kubelet[1920]: I0516 00:55:11.527251 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-bpf-maps\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527668 kubelet[1920]: I0516 00:55:11.527267 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-config-path\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527668 kubelet[1920]: I0516 00:55:11.527280 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cni-path\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527668 kubelet[1920]: I0516 00:55:11.527296 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aded17ed-67fd-4df7-8183-3bab5437f867-clustermesh-secrets\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527799 kubelet[1920]: I0516 00:55:11.527312 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-hostproc\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527799 kubelet[1920]: I0516 00:55:11.527329 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psxgn\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527799 kubelet[1920]: I0516 00:55:11.527344 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-cgroup\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.527799 kubelet[1920]: I0516 00:55:11.527358 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-net\") pod \"aded17ed-67fd-4df7-8183-3bab5437f867\" (UID: \"aded17ed-67fd-4df7-8183-3bab5437f867\") " May 16 00:55:11.528470 kubelet[1920]: I0516 00:55:11.528356 1920 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.528470 kubelet[1920]: I0516 00:55:11.528411 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.529199 kubelet[1920]: I0516 00:55:11.528623 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.529199 kubelet[1920]: I0516 00:55:11.528678 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530731 kubelet[1920]: I0516 00:55:11.530625 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:55:11.530731 kubelet[1920]: I0516 00:55:11.530682 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530731 kubelet[1920]: I0516 00:55:11.530704 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cni-path" (OuterVolumeSpecName: "cni-path") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530731 kubelet[1920]: I0516 00:55:11.530720 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-hostproc" (OuterVolumeSpecName: "hostproc") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530731 kubelet[1920]: I0516 00:55:11.530734 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530911 kubelet[1920]: I0516 00:55:11.530746 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530911 kubelet[1920]: I0516 00:55:11.530778 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:11.530911 kubelet[1920]: I0516 00:55:11.530848 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86addf11-2228-4a20-b0c7-75c96eeb959d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86addf11-2228-4a20-b0c7-75c96eeb959d" (UID: "86addf11-2228-4a20-b0c7-75c96eeb959d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:55:11.534386 kubelet[1920]: I0516 00:55:11.534351 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86addf11-2228-4a20-b0c7-75c96eeb959d-kube-api-access-j7zsn" (OuterVolumeSpecName: "kube-api-access-j7zsn") pod "86addf11-2228-4a20-b0c7-75c96eeb959d" (UID: "86addf11-2228-4a20-b0c7-75c96eeb959d"). InnerVolumeSpecName "kube-api-access-j7zsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:11.534482 kubelet[1920]: I0516 00:55:11.534429 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn" (OuterVolumeSpecName: "kube-api-access-psxgn") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "kube-api-access-psxgn". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:11.534553 kubelet[1920]: I0516 00:55:11.534529 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:11.534599 kubelet[1920]: I0516 00:55:11.534582 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aded17ed-67fd-4df7-8183-3bab5437f867-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aded17ed-67fd-4df7-8183-3bab5437f867" (UID: "aded17ed-67fd-4df7-8183-3bab5437f867"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:55:11.627614 kubelet[1920]: I0516 00:55:11.627561 1920 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627614 kubelet[1920]: I0516 00:55:11.627604 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627614 kubelet[1920]: I0516 00:55:11.627621 1920 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627636 1920 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aded17ed-67fd-4df7-8183-3bab5437f867-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627651 1920 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627665 1920 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psxgn\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-kube-api-access-psxgn\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627680 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627694 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627708 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627718 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86addf11-2228-4a20-b0c7-75c96eeb959d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627807 kubelet[1920]: I0516 00:55:11.627726 1920 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627997 kubelet[1920]: I0516 00:55:11.627734 1920 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j7zsn\" (UniqueName: \"kubernetes.io/projected/86addf11-2228-4a20-b0c7-75c96eeb959d-kube-api-access-j7zsn\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627997 kubelet[1920]: I0516 00:55:11.627742 1920 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627997 kubelet[1920]: I0516 00:55:11.627750 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627997 kubelet[1920]: I0516 00:55:11.627759 1920 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aded17ed-67fd-4df7-8183-3bab5437f867-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.627997 kubelet[1920]: I0516 00:55:11.627767 1920 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aded17ed-67fd-4df7-8183-3bab5437f867-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:55:11.762780 kubelet[1920]: I0516 00:55:11.762732 1920 scope.go:117] "RemoveContainer" containerID="3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3" May 16 00:55:11.765146 env[1218]: time="2025-05-16T00:55:11.765095980Z" level=info msg="RemoveContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\"" May 16 00:55:11.766370 systemd[1]: Removed slice kubepods-burstable-podaded17ed_67fd_4df7_8183_3bab5437f867.slice. May 16 00:55:11.766475 systemd[1]: kubepods-burstable-podaded17ed_67fd_4df7_8183_3bab5437f867.slice: Consumed 6.586s CPU time. May 16 00:55:11.770732 env[1218]: time="2025-05-16T00:55:11.770661945Z" level=info msg="RemoveContainer for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" returns successfully" May 16 00:55:11.771804 kubelet[1920]: I0516 00:55:11.771776 1920 scope.go:117] "RemoveContainer" containerID="93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be" May 16 00:55:11.773094 env[1218]: time="2025-05-16T00:55:11.773063135Z" level=info msg="RemoveContainer for \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\"" May 16 00:55:11.775598 env[1218]: time="2025-05-16T00:55:11.775458686Z" level=info msg="RemoveContainer for \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\" returns successfully" May 16 00:55:11.775800 kubelet[1920]: I0516 00:55:11.775782 1920 scope.go:117] "RemoveContainer" containerID="4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329" May 16 00:55:11.776250 systemd[1]: Removed slice kubepods-besteffort-pod86addf11_2228_4a20_b0c7_75c96eeb959d.slice. 
May 16 00:55:11.777637 env[1218]: time="2025-05-16T00:55:11.777565802Z" level=info msg="RemoveContainer for \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\"" May 16 00:55:11.782081 env[1218]: time="2025-05-16T00:55:11.781970311Z" level=info msg="RemoveContainer for \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\" returns successfully" May 16 00:55:11.782210 kubelet[1920]: I0516 00:55:11.782164 1920 scope.go:117] "RemoveContainer" containerID="482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5" May 16 00:55:11.784358 env[1218]: time="2025-05-16T00:55:11.783502599Z" level=info msg="RemoveContainer for \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\"" May 16 00:55:11.786382 env[1218]: time="2025-05-16T00:55:11.786192224Z" level=info msg="RemoveContainer for \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\" returns successfully" May 16 00:55:11.786472 kubelet[1920]: I0516 00:55:11.786377 1920 scope.go:117] "RemoveContainer" containerID="636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263" May 16 00:55:11.787341 env[1218]: time="2025-05-16T00:55:11.787317801Z" level=info msg="RemoveContainer for \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\"" May 16 00:55:11.789917 env[1218]: time="2025-05-16T00:55:11.789879068Z" level=info msg="RemoveContainer for \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\" returns successfully" May 16 00:55:11.790159 kubelet[1920]: I0516 00:55:11.790130 1920 scope.go:117] "RemoveContainer" containerID="3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3" May 16 00:55:11.790740 env[1218]: time="2025-05-16T00:55:11.790674651Z" level=error msg="ContainerStatus for \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\": not found" May 16 00:55:11.791235 kubelet[1920]: E0516 00:55:11.790838 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\": not found" containerID="3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3" May 16 00:55:11.791235 kubelet[1920]: I0516 00:55:11.790888 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3"} err="failed to get container status \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cdbe608b618ae1d9f0b32b07f5d39314c5b065d94d649782c523c91625359b3\": not found" May 16 00:55:11.791235 kubelet[1920]: I0516 00:55:11.791085 1920 scope.go:117] "RemoveContainer" containerID="93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be" May 16 00:55:11.792001 env[1218]: time="2025-05-16T00:55:11.791884146Z" level=error msg="ContainerStatus for \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\": not found" May 16 00:55:11.792096 kubelet[1920]: E0516 00:55:11.792049 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\": not found" containerID="93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be" May 16 00:55:11.792140 kubelet[1920]: I0516 00:55:11.792073 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be"} err="failed to get container status \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\": rpc error: code = NotFound desc = an error occurred when try to find container \"93d9c677e2bc0ce9f4ead1d3869f4ae31cd5426345309dc2124549ba69dbc8be\": not found" May 16 00:55:11.792140 kubelet[1920]: I0516 00:55:11.792125 1920 scope.go:117] "RemoveContainer" containerID="4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329" May 16 00:55:11.792342 env[1218]: time="2025-05-16T00:55:11.792299898Z" level=error msg="ContainerStatus for \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\": not found" May 16 00:55:11.793794 env[1218]: time="2025-05-16T00:55:11.792669810Z" level=error msg="ContainerStatus for \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\": not found" May 16 00:55:11.793794 env[1218]: time="2025-05-16T00:55:11.793157360Z" level=error msg="ContainerStatus for \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\": not found" May 16 00:55:11.793858 kubelet[1920]: E0516 00:55:11.792435 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\": not found" containerID="4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329" May 16 00:55:11.793858 kubelet[1920]: I0516 00:55:11.792484 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329"} err="failed to get container status \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eaaabfea6c583cd427452688941ddfe781bee363ad8ad1b198934355b385329\": not found" May 16 00:55:11.793858 kubelet[1920]: I0516 00:55:11.792498 1920 scope.go:117] "RemoveContainer" containerID="482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5" May 16 00:55:11.793858 kubelet[1920]: E0516 00:55:11.792804 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\": not found" containerID="482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5" May 16 00:55:11.793858 kubelet[1920]: I0516 00:55:11.792871 1920 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5"} err="failed to get container status \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"482d95edee7ab972c0499dc38f903609136a997b2eb4fcd2bb915c94df733fc5\": not found" May 16 00:55:11.793858 kubelet[1920]: I0516 00:55:11.792910 1920 scope.go:117] "RemoveContainer" containerID="636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263" May 16 00:55:11.793991 kubelet[1920]: E0516 00:55:11.793287 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\": not found" containerID="636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263" May 16 00:55:11.793991 kubelet[1920]: I0516 00:55:11.793329 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263"} err="failed to get container status \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\": rpc error: code = NotFound desc = an error occurred when try to find container \"636ef1039c2762bcd8d9d0351fe6f1df504adf4b910be49347bbcc0af4d57263\": not found" May 16 00:55:11.793991 kubelet[1920]: I0516 00:55:11.793342 1920 scope.go:117] "RemoveContainer" containerID="c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e" May 16 00:55:11.794612 env[1218]: time="2025-05-16T00:55:11.794587730Z" level=info msg="RemoveContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\"" May 16 00:55:11.796959 env[1218]: time="2025-05-16T00:55:11.796926002Z" level=info msg="RemoveContainer for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" returns successfully" May 16 00:55:11.797164 kubelet[1920]: I0516 00:55:11.797112 1920 scope.go:117] "RemoveContainer" containerID="c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e" May 16 00:55:11.797384 env[1218]: time="2025-05-16T00:55:11.797338033Z" level=error msg="ContainerStatus for \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\": not found" May 16 00:55:11.797489 kubelet[1920]: E0516 00:55:11.797471 1920 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\": not found" containerID="c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e" May 16 00:55:11.797544 kubelet[1920]: I0516 00:55:11.797494 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e"} err="failed to get container status \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3784ebcb73a1016d87d8fbc3a0b4cd18e8e2fb3b788ee2008ca2131c849526e\": not found" May 16 00:55:12.282881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91-rootfs.mount: Deactivated successfully. 
May 16 00:55:12.282975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ddd7cb198622e03f225ab2488353a2132c497674579a7800ef8ed5b1cfb9a91-shm.mount: Deactivated successfully. May 16 00:55:12.283033 systemd[1]: var-lib-kubelet-pods-aded17ed\x2d67fd\x2d4df7\x2d8183\x2d3bab5437f867-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsxgn.mount: Deactivated successfully. May 16 00:55:12.283090 systemd[1]: var-lib-kubelet-pods-86addf11\x2d2228\x2d4a20\x2db0c7\x2d75c96eeb959d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj7zsn.mount: Deactivated successfully. May 16 00:55:12.283156 systemd[1]: var-lib-kubelet-pods-aded17ed\x2d67fd\x2d4df7\x2d8183\x2d3bab5437f867-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:55:12.283204 systemd[1]: var-lib-kubelet-pods-aded17ed\x2d67fd\x2d4df7\x2d8183\x2d3bab5437f867-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:55:12.613888 kubelet[1920]: I0516 00:55:12.613781 1920 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86addf11-2228-4a20-b0c7-75c96eeb959d" path="/var/lib/kubelet/pods/86addf11-2228-4a20-b0c7-75c96eeb959d/volumes" May 16 00:55:12.614557 kubelet[1920]: I0516 00:55:12.614531 1920 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aded17ed-67fd-4df7-8183-3bab5437f867" path="/var/lib/kubelet/pods/aded17ed-67fd-4df7-8183-3bab5437f867/volumes" May 16 00:55:13.240993 sshd[3525]: pam_unix(sshd:session): session closed for user core May 16 00:55:13.244510 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:48474.service: Deactivated successfully. May 16 00:55:13.245110 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:55:13.245270 systemd[1]: session-21.scope: Consumed 1.359s CPU time. May 16 00:55:13.245705 systemd-logind[1206]: Session 21 logged out. Waiting for processes to exit. May 16 00:55:13.246858 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:57610.service. May 16 00:55:13.247514 systemd-logind[1206]: Removed session 21. May 16 00:55:13.291541 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 57610 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:13.292765 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:13.295954 systemd-logind[1206]: New session 22 of user core. May 16 00:55:13.296803 systemd[1]: Started session-22.scope. May 16 00:55:14.662682 sshd[3689]: pam_unix(sshd:session): session closed for user core May 16 00:55:14.666848 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:57616.service. May 16 00:55:14.671765 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:57610.service: Deactivated successfully. May 16 00:55:14.672637 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:55:14.672824 systemd[1]: session-22.scope: Consumed 1.288s CPU time. May 16 00:55:14.673776 systemd-logind[1206]: Session 22 logged out. Waiting for processes to exit. May 16 00:55:14.674700 systemd-logind[1206]: Removed session 22. May 16 00:55:14.680319 systemd[1]: Created slice kubepods-burstable-pod927a83d4_46c3_457c_bf68_2aee4471120b.slice. May 16 00:55:14.723684 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 57616 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:14.725274 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:14.728637 systemd-logind[1206]: New session 23 of user core. 
May 16 00:55:14.729473 systemd[1]: Started session-23.scope. May 16 00:55:14.743575 kubelet[1920]: I0516 00:55:14.743536 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-run\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743807 kubelet[1920]: I0516 00:55:14.743574 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-config-path\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743807 kubelet[1920]: I0516 00:55:14.743596 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-kernel\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743807 kubelet[1920]: I0516 00:55:14.743612 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-cgroup\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743807 kubelet[1920]: I0516 00:55:14.743628 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-clustermesh-secrets\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743807 kubelet[1920]: I0516 00:55:14.743641 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-hostproc\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743655 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-ipsec-secrets\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743671 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-hubble-tls\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743684 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-bpf-maps\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743697 1920 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-xtables-lock\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743712 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-etc-cni-netd\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.743927 kubelet[1920]: I0516 00:55:14.743728 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsklp\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-kube-api-access-hsklp\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.744051 kubelet[1920]: I0516 00:55:14.743743 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cni-path\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.744051 kubelet[1920]: I0516 00:55:14.743761 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-lib-modules\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.744051 kubelet[1920]: I0516 00:55:14.743774 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-net\") pod \"cilium-kz6s4\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " pod="kube-system/cilium-kz6s4" May 16 00:55:14.856779 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:57630.service. May 16 00:55:14.861184 sshd[3701]: pam_unix(sshd:session): session closed for user core May 16 00:55:14.863159 kubelet[1920]: E0516 00:55:14.861893 1920 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[clustermesh-secrets kube-api-access-hsklp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-kz6s4" podUID="927a83d4-46c3-457c-bf68-2aee4471120b" May 16 00:55:14.867344 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:57616.service: Deactivated successfully. May 16 00:55:14.867969 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:55:14.871897 systemd-logind[1206]: Session 23 logged out. Waiting for processes to exit. May 16 00:55:14.874091 systemd-logind[1206]: Removed session 23. May 16 00:55:14.907097 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:55:14.908314 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:55:14.911630 systemd-logind[1206]: New session 24 of user core. May 16 00:55:14.912480 systemd[1]: Started session-24.scope. 
May 16 00:55:15.634923 kubelet[1920]: E0516 00:55:15.634878 1920 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:55:15.852485 kubelet[1920]: I0516 00:55:15.852457 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-cgroup\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852485 kubelet[1920]: I0516 00:55:15.852489 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-hostproc\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852518 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cni-path\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852535 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-net\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852549 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-run\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852566 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-hubble-tls\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852585 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-ipsec-secrets\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852796 kubelet[1920]: I0516 00:55:15.852583 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.852942 kubelet[1920]: I0516 00:55:15.852599 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-bpf-maps\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852942 kubelet[1920]: I0516 00:55:15.852624 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.852942 kubelet[1920]: I0516 00:55:15.852645 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-hostproc" (OuterVolumeSpecName: "hostproc") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.852942 kubelet[1920]: I0516 00:55:15.852650 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-clustermesh-secrets\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.852942 kubelet[1920]: I0516 00:55:15.852658 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cni-path" (OuterVolumeSpecName: "cni-path") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.853063 kubelet[1920]: I0516 00:55:15.852667 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-etc-cni-netd\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853063 kubelet[1920]: I0516 00:55:15.852671 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.853063 kubelet[1920]: I0516 00:55:15.852684 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.853063 kubelet[1920]: I0516 00:55:15.852686 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsklp\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-kube-api-access-hsklp\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853307 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-config-path\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853356 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-kernel\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853372 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-xtables-lock\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853394 1920 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-lib-modules\") pod \"927a83d4-46c3-457c-bf68-2aee4471120b\" (UID: \"927a83d4-46c3-457c-bf68-2aee4471120b\") " May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853429 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853439 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853529 kubelet[1920]: I0516 00:55:15.853460 1920 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853726 kubelet[1920]: I0516 00:55:15.853469 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853726 kubelet[1920]: I0516 00:55:15.853477 1920 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853726 kubelet[1920]: I0516 00:55:15.853484 1920 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.853726 kubelet[1920]: I0516 00:55:15.853508 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.857834 kubelet[1920]: I0516 00:55:15.855180 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:55:15.857834 kubelet[1920]: I0516 00:55:15.855224 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.857834 kubelet[1920]: I0516 00:55:15.855241 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.857834 kubelet[1920]: I0516 00:55:15.855257 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.856562 systemd[1]: var-lib-kubelet-pods-927a83d4\x2d46c3\x2d457c\x2dbf68\x2d2aee4471120b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhsklp.mount: Deactivated successfully. May 16 00:55:15.858154 kubelet[1920]: I0516 00:55:15.855617 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-kube-api-access-hsklp" (OuterVolumeSpecName: "kube-api-access-hsklp") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "kube-api-access-hsklp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:15.858154 kubelet[1920]: I0516 00:55:15.857895 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:55:15.856648 systemd[1]: var-lib-kubelet-pods-927a83d4\x2d46c3\x2d457c\x2dbf68\x2d2aee4471120b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 16 00:55:15.858360 kubelet[1920]: I0516 00:55:15.858337 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:15.858633 systemd[1]: var-lib-kubelet-pods-927a83d4\x2d46c3\x2d457c\x2dbf68\x2d2aee4471120b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:55:15.859056 kubelet[1920]: I0516 00:55:15.859023 1920 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "927a83d4-46c3-457c-bf68-2aee4471120b" (UID: "927a83d4-46c3-457c-bf68-2aee4471120b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:55:15.860285 systemd[1]: var-lib-kubelet-pods-927a83d4\x2d46c3\x2d457c\x2dbf68\x2d2aee4471120b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:55:15.954386 kubelet[1920]: I0516 00:55:15.954303 1920 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.954517 kubelet[1920]: I0516 00:55:15.954501 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.954581 kubelet[1920]: I0516 00:55:15.954569 1920 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/927a83d4-46c3-457c-bf68-2aee4471120b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.954643 kubelet[1920]: I0516 00:55:15.954632 1920 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.954700 kubelet[1920]: I0516 00:55:15.954689 1920 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hsklp\" (UniqueName: \"kubernetes.io/projected/927a83d4-46c3-457c-bf68-2aee4471120b-kube-api-access-hsklp\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.954761 kubelet[1920]: I0516 00:55:15.954750 1920 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/927a83d4-46c3-457c-bf68-2aee4471120b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.955217 kubelet[1920]: I0516 00:55:15.955192 1920 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.955563 kubelet[1920]: I0516 00:55:15.955501 1920 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:55:15.955654 kubelet[1920]: I0516 00:55:15.955640 1920 reconciler_common.go:299] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/927a83d4-46c3-457c-bf68-2aee4471120b-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:55:16.617880 systemd[1]: Removed slice kubepods-burstable-pod927a83d4_46c3_457c_bf68_2aee4471120b.slice. May 16 00:55:16.834912 systemd[1]: Created slice kubepods-burstable-pod57058691_f940_4ea5_bb92_4ed44d2dfbd8.slice. May 16 00:55:16.961158 kubelet[1920]: I0516 00:55:16.961003 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-cilium-run\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961158 kubelet[1920]: I0516 00:55:16.961054 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-bpf-maps\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961158 kubelet[1920]: I0516 00:55:16.961073 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-lib-modules\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961158 kubelet[1920]: I0516 00:55:16.961088 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/57058691-f940-4ea5-bb92-4ed44d2dfbd8-clustermesh-secrets\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961177 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/57058691-f940-4ea5-bb92-4ed44d2dfbd8-hubble-tls\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961203 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-host-proc-sys-kernel\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961219 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnm5\" (UniqueName: \"kubernetes.io/projected/57058691-f940-4ea5-bb92-4ed44d2dfbd8-kube-api-access-sxnm5\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961240 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-etc-cni-netd\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961253 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57058691-f940-4ea5-bb92-4ed44d2dfbd8-cilium-config-path\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961578 kubelet[1920]: I0516 00:55:16.961281 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-hostproc\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961721 kubelet[1920]: I0516 00:55:16.961298 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-cni-path\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961721 kubelet[1920]: I0516 00:55:16.961319 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-xtables-lock\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961721 kubelet[1920]: I0516 00:55:16.961336 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/57058691-f940-4ea5-bb92-4ed44d2dfbd8-cilium-ipsec-secrets\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961721 kubelet[1920]: I0516 00:55:16.961364 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-cilium-cgroup\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:16.961721 kubelet[1920]: I0516 00:55:16.961388 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/57058691-f940-4ea5-bb92-4ed44d2dfbd8-host-proc-sys-net\") pod \"cilium-xs5br\" (UID: \"57058691-f940-4ea5-bb92-4ed44d2dfbd8\") " pod="kube-system/cilium-xs5br" May 16 00:55:17.137230 kubelet[1920]: E0516 00:55:17.137198 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:17.139013 env[1218]: time="2025-05-16T00:55:17.138612594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xs5br,Uid:57058691-f940-4ea5-bb92-4ed44d2dfbd8,Namespace:kube-system,Attempt:0,}" May 16 00:55:17.158945 env[1218]: time="2025-05-16T00:55:17.158891791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:55:17.158945 env[1218]: time="2025-05-16T00:55:17.158933511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:55:17.159128 env[1218]: time="2025-05-16T00:55:17.158943831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:55:17.159128 env[1218]: time="2025-05-16T00:55:17.159079829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2 pid=3747 runtime=io.containerd.runc.v2 May 16 00:55:17.171950 systemd[1]: Started cri-containerd-605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2.scope. May 16 00:55:17.203004 env[1218]: time="2025-05-16T00:55:17.202962785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xs5br,Uid:57058691-f940-4ea5-bb92-4ed44d2dfbd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\"" May 16 00:55:17.203569 kubelet[1920]: E0516 00:55:17.203531 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:17.208487 env[1218]: time="2025-05-16T00:55:17.208432800Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:55:17.217565 env[1218]: time="2025-05-16T00:55:17.217477412Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb\"" May 16 00:55:17.218489 env[1218]: time="2025-05-16T00:55:17.218234443Z" level=info msg="StartContainer for \"b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb\"" May 16 00:55:17.230811 systemd[1]: Started cri-containerd-b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb.scope. May 16 00:55:17.261809 env[1218]: time="2025-05-16T00:55:17.260334860Z" level=info msg="StartContainer for \"b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb\" returns successfully" May 16 00:55:17.267635 systemd[1]: cri-containerd-b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb.scope: Deactivated successfully. 
May 16 00:55:17.293390 env[1218]: time="2025-05-16T00:55:17.293339266Z" level=info msg="shim disconnected" id=b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb May 16 00:55:17.293390 env[1218]: time="2025-05-16T00:55:17.293384265Z" level=warning msg="cleaning up after shim disconnected" id=b50d678477c0d9130aa1905119bf5dcd57cf69698c721ba7b7feedac015d88eb namespace=k8s.io May 16 00:55:17.293390 env[1218]: time="2025-05-16T00:55:17.293393985Z" level=info msg="cleaning up dead shim" May 16 00:55:17.300010 env[1218]: time="2025-05-16T00:55:17.299970187Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\n" May 16 00:55:17.779829 kubelet[1920]: E0516 00:55:17.779648 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:17.783118 env[1218]: time="2025-05-16T00:55:17.783045817Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:55:17.792388 env[1218]: time="2025-05-16T00:55:17.792330106Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf\"" May 16 00:55:17.792910 env[1218]: time="2025-05-16T00:55:17.792873420Z" level=info msg="StartContainer for \"3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf\"" May 16 00:55:17.808863 systemd[1]: Started cri-containerd-3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf.scope. May 16 00:55:17.837370 env[1218]: time="2025-05-16T00:55:17.837323809Z" level=info msg="StartContainer for \"3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf\" returns successfully" May 16 00:55:17.845487 systemd[1]: cri-containerd-3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf.scope: Deactivated successfully. 
May 16 00:55:17.862419 env[1218]: time="2025-05-16T00:55:17.862360470Z" level=info msg="shim disconnected" id=3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf May 16 00:55:17.862419 env[1218]: time="2025-05-16T00:55:17.862400509Z" level=warning msg="cleaning up after shim disconnected" id=3e26f336efce3f1749211864be97e8b7a876ebdd5681169b62b9ba8b124ac0bf namespace=k8s.io May 16 00:55:17.862419 env[1218]: time="2025-05-16T00:55:17.862409349Z" level=info msg="cleaning up dead shim" May 16 00:55:17.871245 env[1218]: time="2025-05-16T00:55:17.871191684Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3896 runtime=io.containerd.runc.v2\n" May 16 00:55:18.614235 kubelet[1920]: I0516 00:55:18.614201 1920 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927a83d4-46c3-457c-bf68-2aee4471120b" path="/var/lib/kubelet/pods/927a83d4-46c3-457c-bf68-2aee4471120b/volumes" May 16 00:55:18.782870 kubelet[1920]: E0516 00:55:18.782845 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:18.786534 env[1218]: time="2025-05-16T00:55:18.786491575Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:55:18.797734 env[1218]: time="2025-05-16T00:55:18.797682856Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19\"" May 16 00:55:18.798357 env[1218]: time="2025-05-16T00:55:18.798321929Z" level=info msg="StartContainer for \"1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19\"" May 16 00:55:18.813542 systemd[1]: Started cri-containerd-1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19.scope. May 16 00:55:18.846308 env[1218]: time="2025-05-16T00:55:18.846258299Z" level=info msg="StartContainer for \"1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19\" returns successfully" May 16 00:55:18.849416 systemd[1]: cri-containerd-1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19.scope: Deactivated successfully. May 16 00:55:18.869726 env[1218]: time="2025-05-16T00:55:18.869416213Z" level=info msg="shim disconnected" id=1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19 May 16 00:55:18.869726 env[1218]: time="2025-05-16T00:55:18.869464372Z" level=warning msg="cleaning up after shim disconnected" id=1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19 namespace=k8s.io May 16 00:55:18.869726 env[1218]: time="2025-05-16T00:55:18.869473772Z" level=info msg="cleaning up dead shim" May 16 00:55:18.876649 env[1218]: time="2025-05-16T00:55:18.876608536Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n" May 16 00:55:19.066496 systemd[1]: run-containerd-runc-k8s.io-1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19-runc.8oWTTh.mount: Deactivated successfully. 
May 16 00:55:19.066588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1eb79c95e68a547eaef0d30d732987bdd092ca50a058949a32d3c63b8eeaaa19-rootfs.mount: Deactivated successfully. May 16 00:55:19.786525 kubelet[1920]: E0516 00:55:19.786499 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:19.790912 env[1218]: time="2025-05-16T00:55:19.790864003Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:55:19.800935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568371384.mount: Deactivated successfully. May 16 00:55:19.809395 env[1218]: time="2025-05-16T00:55:19.809341669Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f\"" May 16 00:55:19.810033 env[1218]: time="2025-05-16T00:55:19.810008503Z" level=info msg="StartContainer for \"74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f\"" May 16 00:55:19.826056 systemd[1]: Started cri-containerd-74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f.scope. May 16 00:55:19.850798 systemd[1]: cri-containerd-74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f.scope: Deactivated successfully. May 16 00:55:19.851849 env[1218]: time="2025-05-16T00:55:19.851765751Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57058691_f940_4ea5_bb92_4ed44d2dfbd8.slice/cri-containerd-74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f.scope/memory.events\": no such file or directory" May 16 00:55:19.853593 env[1218]: time="2025-05-16T00:55:19.853548934Z" level=info msg="StartContainer for \"74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f\" returns successfully" May 16 00:55:19.871542 env[1218]: time="2025-05-16T00:55:19.871494766Z" level=info msg="shim disconnected" id=74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f May 16 00:55:19.871542 env[1218]: time="2025-05-16T00:55:19.871538846Z" level=warning msg="cleaning up after shim disconnected" id=74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f namespace=k8s.io May 16 00:55:19.871708 env[1218]: time="2025-05-16T00:55:19.871549886Z" level=info msg="cleaning up dead shim" May 16 00:55:19.878650 env[1218]: time="2025-05-16T00:55:19.878617939Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4007 runtime=io.containerd.runc.v2\n" May 16 00:55:20.066574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ac38619b87fcf62f2444b6fbe5ad101b66b7f339389bdc40c76ef138953f8f-rootfs.mount: Deactivated successfully. 
May 16 00:55:20.635377 kubelet[1920]: E0516 00:55:20.635342 1920 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:55:20.791336 kubelet[1920]: E0516 00:55:20.791295 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:20.795303 env[1218]: time="2025-05-16T00:55:20.795245110Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:55:20.814779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525200969.mount: Deactivated successfully. May 16 00:55:20.819635 env[1218]: time="2025-05-16T00:55:20.819592511Z" level=info msg="CreateContainer within sandbox \"605fb682575ff4a94c001540187eca2659813ea1ab266d66c6742bcc787d4ac2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1\"" May 16 00:55:20.820465 env[1218]: time="2025-05-16T00:55:20.820152627Z" level=info msg="StartContainer for \"ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1\"" May 16 00:55:20.834859 systemd[1]: Started cri-containerd-ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1.scope. May 16 00:55:20.869602 env[1218]: time="2025-05-16T00:55:20.869559064Z" level=info msg="StartContainer for \"ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1\" returns successfully" May 16 00:55:21.104470 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 16 00:55:21.794505 kubelet[1920]: E0516 00:55:21.794476 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:21.809221 kubelet[1920]: I0516 00:55:21.809162 1920 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xs5br" podStartSLOduration=5.809146512 podStartE2EDuration="5.809146512s" podCreationTimestamp="2025-05-16 00:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:55:21.8079832 +0000 UTC m=+81.299227187" watchObservedRunningTime="2025-05-16 00:55:21.809146512 +0000 UTC m=+81.300390499" May 16 00:55:21.876154 kubelet[1920]: I0516 00:55:21.876110 1920 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:55:21Z","lastTransitionTime":"2025-05-16T00:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:55:23.140923 kubelet[1920]: E0516 00:55:23.140878 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:23.260927 systemd[1]: run-containerd-runc-k8s.io-ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1-runc.DPCRnZ.mount: Deactivated successfully. 
May 16 00:55:23.844571 systemd-networkd[1045]: lxc_health: Link UP May 16 00:55:23.855986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:55:23.856173 systemd-networkd[1045]: lxc_health: Gained carrier May 16 00:55:25.141919 kubelet[1920]: E0516 00:55:25.141872 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:25.501531 systemd-networkd[1045]: lxc_health: Gained IPv6LL May 16 00:55:25.801916 kubelet[1920]: E0516 00:55:25.801802 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:26.802991 kubelet[1920]: E0516 00:55:26.802954 1920 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:27.506961 systemd[1]: run-containerd-runc-k8s.io-ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1-runc.aAodJj.mount: Deactivated successfully. May 16 00:55:29.623309 systemd[1]: run-containerd-runc-k8s.io-ef2e54a06ac220fbe8b91366ff41bb92fa49e2d8230e7d5319c19070039aa2a1-runc.00KikY.mount: Deactivated successfully. May 16 00:55:29.680902 sshd[3718]: pam_unix(sshd:session): session closed for user core May 16 00:55:29.684041 systemd-logind[1206]: Session 24 logged out. Waiting for processes to exit. May 16 00:55:29.684352 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:57630.service: Deactivated successfully. May 16 00:55:29.685036 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:55:29.686013 systemd-logind[1206]: Removed session 24.