May 17 00:38:58.865336 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:38:58.865362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:38:58.865373 kernel: BIOS-provided physical RAM map: May 17 00:38:58.865381 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:38:58.865388 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:38:58.865395 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:38:58.865404 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 17 00:38:58.865412 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 17 00:38:58.865422 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 17 00:38:58.865430 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 17 00:38:58.865437 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:38:58.865445 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:38:58.865452 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 00:38:58.865460 kernel: NX (Execute Disable) protection: active May 17 00:38:58.865472 kernel: SMBIOS 2.8 present. May 17 00:38:58.865480 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 17 00:38:58.865489 kernel: Hypervisor detected: KVM May 17 00:38:58.865496 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:38:58.865504 kernel: kvm-clock: cpu 0, msr 7b19a001, primary cpu clock May 17 00:38:58.865513 kernel: kvm-clock: using sched offset of 2546014045 cycles May 17 00:38:58.865525 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:38:58.865536 kernel: tsc: Detected 2794.748 MHz processor May 17 00:38:58.865551 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:38:58.865569 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:38:58.865583 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 17 00:38:58.865598 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:38:58.865612 kernel: Using GB pages for direct mapping May 17 00:38:58.865628 kernel: ACPI: Early table checksum verification disabled May 17 00:38:58.865643 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 17 00:38:58.865658 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865672 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865684 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865699 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 17 00:38:58.865712 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865724 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865758 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865771 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:38:58.865795 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 17 00:38:58.865808 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 17 00:38:58.865822 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 17 00:38:58.865846 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 17 00:38:58.865859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 17 00:38:58.865875 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 17 00:38:58.865890 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 17 00:38:58.865904 kernel: No NUMA configuration found May 17 00:38:58.865920 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 17 00:38:58.865937 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 17 00:38:58.866034 kernel: Zone ranges: May 17 00:38:58.866043 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:38:58.866052 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 17 00:38:58.866060 kernel: Normal empty May 17 00:38:58.866069 kernel: Movable zone start for each node May 17 00:38:58.866078 kernel: Early memory node ranges May 17 00:38:58.866086 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:38:58.866095 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 17 00:38:58.866122 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 17 00:38:58.866134 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:38:58.866142 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:38:58.866151 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 17 00:38:58.866160 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:38:58.866169 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:38:58.866178 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:38:58.866187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:38:58.866204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:38:58.866214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:38:58.866224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:38:58.866233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:38:58.866242 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:38:58.866251 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:38:58.866260 kernel: TSC deadline timer available May 17 00:38:58.866269 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 17 00:38:58.866277 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:38:58.866286 kernel: kvm-guest: setup PV sched yield May 17 00:38:58.866295 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:38:58.866305 kernel: Booting paravirtualized kernel on KVM May 17 00:38:58.866314 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:38:58.866323 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 17 00:38:58.866331 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 May 17 00:38:58.866339 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 17 00:38:58.866347 kernel: pcpu-alloc: [0] 0 1 2 3 May 17 00:38:58.866355 kernel: kvm-guest: setup async PF for cpu 0 May 17 00:38:58.866364 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 17 00:38:58.866373 kernel: kvm-guest: PV spinlocks enabled May 17 00:38:58.866383 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:38:58.866392 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 17 00:38:58.866401 kernel: Policy zone: DMA32 May 17 00:38:58.866411 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:38:58.866421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:38:58.866430 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:38:58.866439 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:38:58.866448 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:38:58.866470 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved) May 17 00:38:58.866479 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 17 00:38:58.866488 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:38:58.866497 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:38:58.866505 kernel: rcu: Hierarchical RCU implementation. May 17 00:38:58.866514 kernel: rcu: RCU event tracing is enabled. May 17 00:38:58.866523 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 17 00:38:58.866532 kernel: Rude variant of Tasks RCU enabled. May 17 00:38:58.866540 kernel: Tracing variant of Tasks RCU enabled. May 17 00:38:58.866551 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:38:58.866560 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 17 00:38:58.866568 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 17 00:38:58.866577 kernel: random: crng init done May 17 00:38:58.866585 kernel: Console: colour VGA+ 80x25 May 17 00:38:58.866594 kernel: printk: console [ttyS0] enabled May 17 00:38:58.866603 kernel: ACPI: Core revision 20210730 May 17 00:38:58.866612 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:38:58.866620 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:38:58.866630 kernel: x2apic enabled May 17 00:38:58.866639 kernel: Switched APIC routing to physical x2apic. May 17 00:38:58.866648 kernel: kvm-guest: setup PV IPIs May 17 00:38:58.866657 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:38:58.866666 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:38:58.866675 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 17 00:38:58.866684 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:38:58.866693 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:38:58.866702 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:38:58.866719 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:38:58.866728 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:38:58.866737 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:38:58.866748 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 17 00:38:58.866758 kernel: RETBleed: Mitigation: untrained return thunk May 17 00:38:58.866768 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:38:58.866777 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:38:58.866801 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:38:58.866811 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:38:58.866821 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:38:58.866831 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:38:58.866841 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 17 00:38:58.866850 kernel: Freeing SMP alternatives memory: 32K May 17 00:38:58.866859 kernel: pid_max: default: 32768 minimum: 301 May 17 00:38:58.866869 kernel: LSM: Security Framework initializing May 17 00:38:58.866879 kernel: SELinux: Initializing. May 17 00:38:58.866889 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:38:58.866901 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:38:58.866911 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 17 00:38:58.866920 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:38:58.866930 kernel: ... version: 0 May 17 00:38:58.866940 kernel: ... bit width: 48 May 17 00:38:58.866949 kernel: ... generic registers: 6 May 17 00:38:58.866959 kernel: ... value mask: 0000ffffffffffff May 17 00:38:58.866969 kernel: ... max period: 00007fffffffffff May 17 00:38:58.866978 kernel: ... fixed-purpose events: 0 May 17 00:38:58.866990 kernel: ... event mask: 000000000000003f May 17 00:38:58.866999 kernel: signal: max sigframe size: 1776 May 17 00:38:58.867009 kernel: rcu: Hierarchical SRCU implementation. May 17 00:38:58.867019 kernel: smp: Bringing up secondary CPUs ... May 17 00:38:58.867029 kernel: x86: Booting SMP configuration: May 17 00:38:58.867038 kernel: .... 
node #0, CPUs: #1 May 17 00:38:58.867048 kernel: kvm-clock: cpu 1, msr 7b19a041, secondary cpu clock May 17 00:38:58.867058 kernel: kvm-guest: setup async PF for cpu 1 May 17 00:38:58.867067 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 17 00:38:58.867079 kernel: #2 May 17 00:38:58.867089 kernel: kvm-clock: cpu 2, msr 7b19a081, secondary cpu clock May 17 00:38:58.867141 kernel: kvm-guest: setup async PF for cpu 2 May 17 00:38:58.867152 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 17 00:38:58.867161 kernel: #3 May 17 00:38:58.867171 kernel: kvm-clock: cpu 3, msr 7b19a0c1, secondary cpu clock May 17 00:38:58.867181 kernel: kvm-guest: setup async PF for cpu 3 May 17 00:38:58.867191 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 17 00:38:58.867200 kernel: smp: Brought up 1 node, 4 CPUs May 17 00:38:58.867210 kernel: smpboot: Max logical packages: 1 May 17 00:38:58.867223 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 17 00:38:58.867232 kernel: devtmpfs: initialized May 17 00:38:58.867242 kernel: x86/mm: Memory block size: 128MB May 17 00:38:58.867252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:38:58.867262 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 17 00:38:58.867272 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:38:58.867282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:38:58.867291 kernel: audit: initializing netlink subsys (disabled) May 17 00:38:58.867301 kernel: audit: type=2000 audit(1747442338.118:1): state=initialized audit_enabled=0 res=1 May 17 00:38:58.867313 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:38:58.867323 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:38:58.867333 kernel: cpuidle: using governor menu May 17 00:38:58.867343 kernel: ACPI: bus type PCI registered May 17 00:38:58.867352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:38:58.867362 kernel: dca service started, version 1.12.1 May 17 00:38:58.867372 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:38:58.867382 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 17 00:38:58.867392 kernel: PCI: Using configuration type 1 for base access May 17 00:38:58.867404 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:38:58.867414 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:38:58.867424 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:38:58.867434 kernel: ACPI: Added _OSI(Module Device) May 17 00:38:58.867443 kernel: ACPI: Added _OSI(Processor Device) May 17 00:38:58.867453 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:38:58.867462 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:38:58.867472 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:38:58.867482 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:38:58.867494 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:38:58.867503 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:38:58.867513 kernel: ACPI: Interpreter enabled May 17 00:38:58.867522 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:38:58.867532 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:38:58.867542 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:38:58.867554 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:38:58.867565 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:38:58.867728 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:38:58.867854 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:38:58.867956 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:38:58.867970 kernel: PCI host bridge to bus 0000:00 May 17 00:38:58.868079 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:38:58.868190 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:38:58.868282 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:38:58.868381 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 17 00:38:58.868497 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:38:58.868618 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 17 00:38:58.868745 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:38:58.868934 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:38:58.869071 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:38:58.869210 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:38:58.869303 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:38:58.869395 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:38:58.869485 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:38:58.869594 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 17 00:38:58.869695 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 17 00:38:58.869814 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:38:58.869921 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:38:58.870031 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 17 00:38:58.870178 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 17 00:38:58.870285 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 00:38:58.870384 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:38:58.870493 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:38:58.870602 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 17 00:38:58.870708 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 17 00:38:58.870823 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 17 00:38:58.870928 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:38:58.871043 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:38:58.871188 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:38:58.871331 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:38:58.871476 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 17 00:38:58.871618 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 17 00:38:58.871807 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:38:58.871912 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:38:58.871927 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:38:58.871938 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:38:58.871948 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:38:58.871958 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:38:58.871971 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:38:58.871982 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:38:58.871992 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:38:58.872002 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:38:58.872012 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:38:58.872022 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:38:58.872032 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:38:58.872042 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:38:58.872052 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:38:58.872065 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:38:58.872075 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:38:58.872085 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:38:58.872095 kernel: iommu: Default domain type: Translated May 17 00:38:58.872119 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:38:58.872216 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:38:58.872308 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:38:58.872400 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:38:58.872413 kernel: vgaarb: loaded May 17 00:38:58.872426 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:38:58.872436 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:38:58.872446 kernel: PTP clock support registered May 17 00:38:58.872456 kernel: PCI: Using ACPI for IRQ routing May 17 00:38:58.872465 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:38:58.872475 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:38:58.872484 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 17 00:38:58.872493 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:38:58.872502 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:38:58.872513 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:38:58.872523 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:38:58.872532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:38:58.872543 kernel: pnp: PnP ACPI init May 17 00:38:58.872661 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:38:58.872675 kernel: pnp: PnP ACPI: found 6 devices May 17 00:38:58.872685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:38:58.872694 kernel: NET: Registered PF_INET protocol family May 17 00:38:58.872706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:38:58.872716 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:38:58.872725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:38:58.872735 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:38:58.872744 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 17 00:38:58.872754 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:38:58.872763 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:38:58.872772 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:38:58.872791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:38:58.872802 kernel: NET: Registered PF_XDP protocol family May 17 00:38:58.872885 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:38:58.872963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:38:58.873043 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:38:58.873147 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 17 00:38:58.873234 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:38:58.873317 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 17 00:38:58.873329 kernel: PCI: CLS 0 bytes, default 64 May 17 00:38:58.873342 kernel: Initialise system trusted keyrings May 17 00:38:58.873352 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:38:58.873361 kernel: Key type asymmetric registered May 17 00:38:58.873370 kernel: Asymmetric key parser 'x509' registered May 17 00:38:58.873380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:38:58.873389 kernel: io scheduler mq-deadline registered May 17 00:38:58.873398 kernel: io scheduler kyber registered May 17 00:38:58.873408 kernel: io scheduler bfq registered May 17 00:38:58.873417 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:38:58.873428 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:38:58.873438 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 
00:38:58.873447 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 17 00:38:58.873457 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:38:58.873466 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:38:58.873476 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:38:58.873485 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:38:58.873494 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:38:58.873598 kernel: rtc_cmos 00:04: RTC can wake from S4 May 17 00:38:58.873616 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:38:58.873707 kernel: rtc_cmos 00:04: registered as rtc0 May 17 00:38:58.873813 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:38:58 UTC (1747442338) May 17 00:38:58.873910 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 17 00:38:58.873925 kernel: NET: Registered PF_INET6 protocol family May 17 00:38:58.873935 kernel: Segment Routing with IPv6 May 17 00:38:58.873944 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:38:58.873954 kernel: NET: Registered PF_PACKET protocol family May 17 00:38:58.873968 kernel: Key type dns_resolver registered May 17 00:38:58.873978 kernel: IPI shorthand broadcast: enabled May 17 00:38:58.873988 kernel: sched_clock: Marking stable (428287391, 104297635)->(617719825, -85134799) May 17 00:38:58.873998 kernel: registered taskstats version 1 May 17 00:38:58.874008 kernel: Loading compiled-in X.509 certificates May 17 00:38:58.874018 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:38:58.874028 kernel: Key type .fscrypt registered May 17 00:38:58.874038 kernel: Key type fscrypt-provisioning registered May 17 00:38:58.874048 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:38:58.874061 kernel: ima: Allocated hash algorithm: sha1 May 17 00:38:58.874071 kernel: ima: No architecture policies found May 17 00:38:58.874080 kernel: clk: Disabling unused clocks May 17 00:38:58.874091 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:38:58.874115 kernel: Write protecting the kernel read-only data: 28672k May 17 00:38:58.874126 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:38:58.874136 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:38:58.874145 kernel: Run /init as init process May 17 00:38:58.874155 kernel: with arguments: May 17 00:38:58.874168 kernel: /init May 17 00:38:58.874178 kernel: with environment: May 17 00:38:58.874188 kernel: HOME=/ May 17 00:38:58.874198 kernel: TERM=linux May 17 00:38:58.874208 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:38:58.874221 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:38:58.874234 systemd[1]: Detected virtualization kvm. May 17 00:38:58.874248 systemd[1]: Detected architecture x86-64. May 17 00:38:58.874258 systemd[1]: Running in initrd. May 17 00:38:58.874269 systemd[1]: No hostname configured, using default hostname. May 17 00:38:58.874279 systemd[1]: Hostname set to <localhost>.
May 17 00:38:58.874290 systemd[1]: Initializing machine ID from VM UUID. May 17 00:38:58.874301 systemd[1]: Queued start job for default target initrd.target. May 17 00:38:58.874312 systemd[1]: Started systemd-ask-password-console.path. May 17 00:38:58.874322 systemd[1]: Reached target cryptsetup.target. May 17 00:38:58.874333 systemd[1]: Reached target paths.target. May 17 00:38:58.874346 systemd[1]: Reached target slices.target. May 17 00:38:58.874366 systemd[1]: Reached target swap.target. May 17 00:38:58.874378 systemd[1]: Reached target timers.target. May 17 00:38:58.874390 systemd[1]: Listening on iscsid.socket. May 17 00:38:58.874401 systemd[1]: Listening on iscsiuio.socket. May 17 00:38:58.874414 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:38:58.874425 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:38:58.874436 systemd[1]: Listening on systemd-journald.socket. May 17 00:38:58.874447 systemd[1]: Listening on systemd-networkd.socket. May 17 00:38:58.874458 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:38:58.874469 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:38:58.874480 systemd[1]: Reached target sockets.target. May 17 00:38:58.874491 systemd[1]: Starting kmod-static-nodes.service... May 17 00:38:58.874501 systemd[1]: Finished network-cleanup.service. May 17 00:38:58.874514 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:38:58.874525 systemd[1]: Starting systemd-journald.service... May 17 00:38:58.874536 systemd[1]: Starting systemd-modules-load.service... May 17 00:38:58.874547 systemd[1]: Starting systemd-resolved.service... May 17 00:38:58.874558 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:38:58.874569 systemd[1]: Finished kmod-static-nodes.service. May 17 00:38:58.874580 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:38:58.874592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:38:58.874604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:38:58.874624 systemd-journald[198]: Journal started May 17 00:38:58.874693 systemd-journald[198]: Runtime Journal (/run/log/journal/cf38f2814328459f9ffde259924f5a7d) is 6.0M, max 48.5M, 42.5M free. May 17 00:38:58.866481 systemd-modules-load[199]: Inserted module 'overlay' May 17 00:38:58.907898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:38:58.907927 kernel: audit: type=1130 audit(1747442338.898:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.907941 systemd[1]: Started systemd-journald.service. May 17 00:38:58.907956 kernel: audit: type=1130 audit(1747442338.903:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:38:58.887946 systemd-resolved[200]: Positive Trust Anchors: May 17 00:38:58.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.887954 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:38:58.916077 kernel: audit: type=1130 audit(1747442338.908:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.916095 kernel: audit: type=1130 audit(1747442338.912:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.887982 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:38:58.890156 systemd-resolved[200]: Defaulting to hostname 'linux'. May 17 00:38:58.906282 systemd[1]: Started systemd-resolved.service. May 17 00:38:58.908745 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:38:58.913076 systemd[1]: Reached target nss-lookup.target. May 17 00:38:58.916945 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:38:58.928894 systemd-modules-load[199]: Inserted module 'br_netfilter' May 17 00:38:58.929791 kernel: Bridge firewalling registered May 17 00:38:58.934895 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:38:58.939319 kernel: audit: type=1130 audit(1747442338.934:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.935678 systemd[1]: Starting dracut-cmdline.service... May 17 00:38:58.944464 dracut-cmdline[215]: dracut-dracut-053 May 17 00:38:58.946570 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:38:58.957118 kernel: SCSI subsystem initialized May 17 00:38:58.968169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:38:58.968221 kernel: device-mapper: uevent: version 1.0.3 May 17 00:38:58.968235 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:38:58.972114 systemd-modules-load[199]: Inserted module 'dm_multipath' May 17 00:38:58.973611 systemd[1]: Finished systemd-modules-load.service. May 17 00:38:58.978183 kernel: audit: type=1130 audit(1747442338.973:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.974276 systemd[1]: Starting systemd-sysctl.service... May 17 00:38:58.983413 systemd[1]: Finished systemd-sysctl.service. May 17 00:38:58.987662 kernel: audit: type=1130 audit(1747442338.983:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:58.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.014128 kernel: Loading iSCSI transport class v2.0-870. May 17 00:38:59.087146 kernel: iscsi: registered transport (tcp) May 17 00:38:59.109496 kernel: iscsi: registered transport (qla4xxx) May 17 00:38:59.109560 kernel: QLogic iSCSI HBA Driver May 17 00:38:59.137155 systemd[1]: Finished dracut-cmdline.service. May 17 00:38:59.142487 kernel: audit: type=1130 audit(1747442339.137:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.138671 systemd[1]: Starting dracut-pre-udev.service... May 17 00:38:59.183129 kernel: raid6: avx2x4 gen() 27191 MB/s May 17 00:38:59.200118 kernel: raid6: avx2x4 xor() 7785 MB/s May 17 00:38:59.217127 kernel: raid6: avx2x2 gen() 31857 MB/s May 17 00:38:59.234115 kernel: raid6: avx2x2 xor() 17681 MB/s May 17 00:38:59.251115 kernel: raid6: avx2x1 gen() 25521 MB/s May 17 00:38:59.268119 kernel: raid6: avx2x1 xor() 14545 MB/s May 17 00:38:59.285117 kernel: raid6: sse2x4 gen() 14291 MB/s May 17 00:38:59.302129 kernel: raid6: sse2x4 xor() 6849 MB/s May 17 00:38:59.319134 kernel: raid6: sse2x2 gen() 15052 MB/s May 17 00:38:59.336125 kernel: raid6: sse2x2 xor() 8988 MB/s May 17 00:38:59.353128 kernel: raid6: sse2x1 gen() 11737 MB/s May 17 00:38:59.370636 kernel: raid6: sse2x1 xor() 7196 MB/s May 17 00:38:59.370662 kernel: raid6: using algorithm avx2x2 gen() 31857 MB/s May 17 00:38:59.370673 kernel: raid6: .... xor() 17681 MB/s, rmw enabled May 17 00:38:59.371399 kernel: raid6: using avx2x2 recovery algorithm May 17 00:38:59.385125 kernel: xor: automatically using best checksumming function avx May 17 00:38:59.479140 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:38:59.485616 systemd[1]: Finished dracut-pre-udev.service. 
May 17 00:38:59.490346 kernel: audit: type=1130 audit(1747442339.486:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.490000 audit: BPF prog-id=7 op=LOAD May 17 00:38:59.490000 audit: BPF prog-id=8 op=LOAD May 17 00:38:59.490599 systemd[1]: Starting systemd-udevd.service... May 17 00:38:59.503723 systemd-udevd[399]: Using default interface naming scheme 'v252'. May 17 00:38:59.507927 systemd[1]: Started systemd-udevd.service. May 17 00:38:59.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.509697 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:38:59.518685 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation May 17 00:38:59.538928 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:38:59.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.540471 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:38:59.570838 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:38:59.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:38:59.605497 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:38:59.611411 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:38:59.611433 kernel: GPT:9289727 != 19775487 May 17 00:38:59.611451 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:38:59.611463 kernel: GPT:9289727 != 19775487 May 17 00:38:59.611474 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:38:59.611485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.611496 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:38:59.617123 kernel: libata version 3.00 loaded. May 17 00:38:59.627602 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:38:59.664154 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:38:59.664179 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:38:59.664280 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:38:59.664356 kernel: AVX2 version of gcm_enc/dec engaged. 
May 17 00:38:59.664366 kernel: AES CTR mode by8 optimization enabled May 17 00:38:59.664374 kernel: scsi host0: ahci May 17 00:38:59.664466 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) May 17 00:38:59.664476 kernel: scsi host1: ahci May 17 00:38:59.664556 kernel: scsi host2: ahci May 17 00:38:59.664640 kernel: scsi host3: ahci May 17 00:38:59.664721 kernel: scsi host4: ahci May 17 00:38:59.664827 kernel: scsi host5: ahci May 17 00:38:59.664909 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 17 00:38:59.664919 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 17 00:38:59.664927 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 17 00:38:59.664938 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 17 00:38:59.664947 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 17 00:38:59.664956 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 17 00:38:59.641453 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:38:59.692040 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:38:59.693289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:38:59.705225 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:38:59.709975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:38:59.712467 systemd[1]: Starting disk-uuid.service... May 17 00:38:59.724219 disk-uuid[538]: Primary Header is updated. May 17 00:38:59.724219 disk-uuid[538]: Secondary Entries is updated. May 17 00:38:59.724219 disk-uuid[538]: Secondary Header is updated. May 17 00:38:59.728130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.733130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:38:59.975781 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:38:59.975852 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:38:59.975862 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:38:59.977429 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:38:59.978114 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:38:59.979127 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:38:59.980125 kernel: ata3.00: applying bridge limits May 17 00:38:59.981132 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:38:59.981153 kernel: ata3.00: configured for UDMA/100 May 17 00:38:59.982129 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:39:00.013520 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:39:00.030632 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:39:00.030650 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:39:00.753544 disk-uuid[539]: The operation has completed successfully. May 17 00:39:00.755006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:39:00.779903 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:39:00.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:00.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.779983 systemd[1]: Finished disk-uuid.service. May 17 00:39:00.786271 systemd[1]: Starting verity-setup.service... May 17 00:39:00.798136 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:39:00.819436 systemd[1]: Found device dev-mapper-usr.device. May 17 00:39:00.822170 systemd[1]: Mounting sysusr-usr.mount... May 17 00:39:00.823999 systemd[1]: Finished verity-setup.service. May 17 00:39:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.883829 systemd[1]: Mounted sysusr-usr.mount. May 17 00:39:00.885430 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:39:00.884746 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:39:00.885427 systemd[1]: Starting ignition-setup.service... May 17 00:39:00.886707 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:39:00.898266 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:39:00.898312 kernel: BTRFS info (device vda6): using free space tree May 17 00:39:00.898325 kernel: BTRFS info (device vda6): has skinny extents May 17 00:39:00.906115 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:39:00.948933 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:39:00.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.950000 audit: BPF prog-id=9 op=LOAD May 17 00:39:00.951430 systemd[1]: Starting systemd-networkd.service... May 17 00:39:00.961669 systemd[1]: Finished ignition-setup.service. May 17 00:39:00.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.962568 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:39:00.973778 systemd-networkd[712]: lo: Link UP May 17 00:39:00.973788 systemd-networkd[712]: lo: Gained carrier May 17 00:39:00.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.974252 systemd-networkd[712]: Enumeration completed May 17 00:39:00.974324 systemd[1]: Started systemd-networkd.service. May 17 00:39:00.974595 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:39:00.975872 systemd[1]: Reached target network.target. May 17 00:39:00.976426 systemd-networkd[712]: eth0: Link UP May 17 00:39:00.976429 systemd-networkd[712]: eth0: Gained carrier May 17 00:39:00.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.978154 systemd[1]: Starting iscsiuio.service... May 17 00:39:00.982294 systemd[1]: Started iscsiuio.service. 
May 17 00:39:00.984441 systemd[1]: Starting iscsid.service... May 17 00:39:00.989898 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:39:00.989898 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:39:00.989898 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:39:00.989898 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:39:00.989898 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:39:00.989898 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:39:00.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:00.988530 systemd[1]: Started iscsid.service. May 17 00:39:00.990573 systemd[1]: Starting dracut-initqueue.service... May 17 00:39:01.000956 systemd[1]: Finished dracut-initqueue.service. May 17 00:39:01.001186 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:39:01.002658 systemd[1]: Reached target remote-fs-pre.target. May 17 00:39:01.005378 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:39:01.008038 systemd[1]: Reached target remote-fs.target. May 17 00:39:01.009641 systemd[1]: Starting dracut-pre-mount.service... May 17 00:39:01.017638 systemd[1]: Finished dracut-pre-mount.service. May 17 00:39:01.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:39:01.021242 ignition[714]: Ignition 2.14.0 May 17 00:39:01.021254 ignition[714]: Stage: fetch-offline May 17 00:39:01.021328 ignition[714]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.021337 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.021436 ignition[714]: parsed url from cmdline: "" May 17 00:39:01.021439 ignition[714]: no config URL provided May 17 00:39:01.021443 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:39:01.021449 ignition[714]: no config at "/usr/lib/ignition/user.ign" May 17 00:39:01.021467 ignition[714]: op(1): [started] loading QEMU firmware config module May 17 00:39:01.021471 ignition[714]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:39:01.025381 ignition[714]: op(1): [finished] loading QEMU firmware config module May 17 00:39:01.068447 ignition[714]: parsing config with SHA512: acc62bbf21f8173e0348c4480fe4e7900d5b249c54bac2dc31bc529d19ec983dcf98457b9d211f3d11d2a3d192b92ec9fb6cc4a5bfe1c5a6debcf47e170fd2a8 May 17 00:39:01.074567 unknown[714]: fetched base config from "system" May 17 00:39:01.074584 unknown[714]: fetched user config from "qemu" May 17 00:39:01.076657 ignition[714]: fetch-offline: fetch-offline passed May 17 00:39:01.077587 ignition[714]: Ignition finished successfully May 17 00:39:01.079253 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:39:01.080934 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.137 May 17 00:39:01.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.080952 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. May 17 00:39:01.081279 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:39:01.082220 systemd[1]: Starting ignition-kargs.service... May 17 00:39:01.094065 ignition[740]: Ignition 2.14.0 May 17 00:39:01.094075 ignition[740]: Stage: kargs May 17 00:39:01.094178 ignition[740]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.094187 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.096850 systemd[1]: Finished ignition-kargs.service. May 17 00:39:01.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.095079 ignition[740]: kargs: kargs passed May 17 00:39:01.099372 systemd[1]: Starting ignition-disks.service... May 17 00:39:01.095126 ignition[740]: Ignition finished successfully May 17 00:39:01.106187 ignition[746]: Ignition 2.14.0 May 17 00:39:01.106199 ignition[746]: Stage: disks May 17 00:39:01.106287 ignition[746]: no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.107828 systemd[1]: Finished ignition-disks.service. May 17 00:39:01.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.106296 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.109565 systemd[1]: Reached target initrd-root-device.target. 
May 17 00:39:01.107236 ignition[746]: disks: disks passed May 17 00:39:01.111259 systemd[1]: Reached target local-fs-pre.target. May 17 00:39:01.107270 ignition[746]: Ignition finished successfully May 17 00:39:01.112149 systemd[1]: Reached target local-fs.target. May 17 00:39:01.114039 systemd[1]: Reached target sysinit.target. May 17 00:39:01.115694 systemd[1]: Reached target basic.target. May 17 00:39:01.117047 systemd[1]: Starting systemd-fsck-root.service... May 17 00:39:01.129843 systemd-fsck[754]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 17 00:39:01.146738 systemd[1]: Finished systemd-fsck-root.service. May 17 00:39:01.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.148352 systemd[1]: Mounting sysroot.mount... May 17 00:39:01.154119 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:39:01.154423 systemd[1]: Mounted sysroot.mount. May 17 00:39:01.155211 systemd[1]: Reached target initrd-root-fs.target. May 17 00:39:01.157489 systemd[1]: Mounting sysroot-usr.mount... May 17 00:39:01.158677 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 17 00:39:01.158703 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:39:01.158731 systemd[1]: Reached target ignition-diskful.target. May 17 00:39:01.160918 systemd[1]: Mounted sysroot-usr.mount. May 17 00:39:01.163135 systemd[1]: Starting initrd-setup-root.service... May 17 00:39:01.168449 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:39:01.170679 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory May 17 00:39:01.173502 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:39:01.176371 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:39:01.197950 systemd[1]: Finished initrd-setup-root.service. May 17 00:39:01.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.199756 systemd[1]: Starting ignition-mount.service... May 17 00:39:01.201279 systemd[1]: Starting sysroot-boot.service... May 17 00:39:01.205536 bash[805]: umount: /sysroot/usr/share/oem: not mounted. May 17 00:39:01.213667 ignition[807]: INFO : Ignition 2.14.0 May 17 00:39:01.213667 ignition[807]: INFO : Stage: mount May 17 00:39:01.213667 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.213667 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:01.215426 systemd[1]: Finished ignition-mount.service. 
May 17 00:39:01.221070 ignition[807]: INFO : mount: mount passed May 17 00:39:01.221070 ignition[807]: INFO : Ignition finished successfully May 17 00:39:01.218549 systemd[1]: Finished sysroot-boot.service. May 17 00:39:01.830799 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:39:01.837127 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) May 17 00:39:01.837152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:39:01.838517 kernel: BTRFS info (device vda6): using free space tree May 17 00:39:01.838536 kernel: BTRFS info (device vda6): has skinny extents May 17 00:39:01.842269 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:39:01.843943 systemd[1]: Starting ignition-files.service... May 17 00:39:01.857355 ignition[835]: INFO : Ignition 2.14.0 May 17 00:39:01.857355 ignition[835]: INFO : Stage: files May 17 00:39:01.859181 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:39:01.859181 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:01.862356 ignition[835]: DEBUG : files: compiled without relabeling support, skipping May 17 00:39:01.863927 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:39:01.863927 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:39:01.867483 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:39:01.867483 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:39:01.867483 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:39:01.867483 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:39:01.867483 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 17 00:39:01.866155 unknown[835]: wrote ssh authorized keys file for user: core May 17 00:39:01.949873 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:39:02.156291 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 17 00:39:02.158561 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:39:02.158561 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 17 00:39:02.246218 systemd-networkd[712]: eth0: Gained IPv6LL May 17 00:39:02.516977 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:39:02.625479 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:39:02.625479 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:39:02.629186 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 17 00:39:03.256293 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:39:03.602800 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 17 00:39:03.602800 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 17 00:39:03.606971 ignition[835]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:39:03.632602 ignition[835]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:39:03.635491 ignition[835]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 17 00:39:03.635491 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:39:03.635491 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:39:03.635491 ignition[835]: INFO : files: files passed May 17 00:39:03.635491 ignition[835]: INFO : Ignition finished successfully May 17 00:39:03.659563 kernel: kauditd_printk_skb: 23 callbacks suppressed May 17 00:39:03.659585 kernel: audit: type=1130 audit(1747442343.635:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.659596 kernel: audit: type=1130 audit(1747442343.647:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.659606 kernel: audit: type=1130 audit(1747442343.651:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.659616 kernel: audit: type=1131 audit(1747442343.651:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.633739 systemd[1]: Finished ignition-files.service. May 17 00:39:03.636195 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:39:03.641853 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:39:03.665437 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 17 00:39:03.642316 systemd[1]: Starting ignition-quench.service... 
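Annotation: the files-stage operations above (op(3) through op(13)) are the kind produced by a config of roughly the following shape. The fragment below is an illustrative Python rendering of an Ignition spec-v3-style document, not the config actually served to this VM; field names follow the public Ignition spec, and the URLs and paths are copied from the log:

import json

# Illustrative only: a config fragment that would yield the file, link and
# unit-preset operations logged above. The real user config is not shown here.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/opt/bin/cilium.tar.gz",
             "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},    # op(10): preset enabled
            {"name": "coreos-metadata.service", "enabled": False}, # op(11): preset disabled
        ],
    },
}
print(json.dumps(config, indent=2))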
May 17 00:39:03.668331 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:39:03.644696 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:39:03.647469 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:39:03.647532 systemd[1]: Finished ignition-quench.service. May 17 00:39:03.680447 kernel: audit: type=1130 audit(1747442343.673:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.681222 kernel: audit: type=1131 audit(1747442343.673:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.652090 systemd[1]: Reached target ignition-complete.target. May 17 00:39:03.659973 systemd[1]: Starting initrd-parse-etc.service... May 17 00:39:03.671503 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:39:03.671565 systemd[1]: Finished initrd-parse-etc.service. May 17 00:39:03.673229 systemd[1]: Reached target initrd-fs.target. May 17 00:39:03.680439 systemd[1]: Reached target initrd.target. May 17 00:39:03.681239 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:39:03.681748 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:39:03.691535 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:39:03.696909 kernel: audit: type=1130 audit(1747442343.692:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.692908 systemd[1]: Starting initrd-cleanup.service... May 17 00:39:03.701214 systemd[1]: Stopped target nss-lookup.target. May 17 00:39:03.702158 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:39:03.703816 systemd[1]: Stopped target timers.target. May 17 00:39:03.705440 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:39:03.711605 kernel: audit: type=1131 audit(1747442343.706:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.705524 systemd[1]: Stopped dracut-pre-pivot.service. 
May 17 00:39:03.748892 kernel: audit: type=1131 audit(1747442343.713:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.748918 kernel: audit: type=1131 audit(1747442343.717:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:39:03.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.707129 systemd[1]: Stopped target initrd.target. May 17 00:39:03.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.751032 iscsid[719]: iscsid shutting down. May 17 00:39:03.711668 systemd[1]: Stopped target basic.target. May 17 00:39:03.753066 ignition[875]: INFO : Ignition 2.14.0 May 17 00:39:03.753066 ignition[875]: INFO : Stage: umount May 17 00:39:03.753066 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:39:03.753066 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:39:03.753066 ignition[875]: INFO : umount: umount passed May 17 00:39:03.753066 ignition[875]: INFO : Ignition finished successfully May 17 00:39:03.756000 audit: BPF prog-id=6 op=UNLOAD May 17 00:39:03.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.711782 systemd[1]: Stopped target ignition-complete.target. May 17 00:39:03.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.712010 systemd[1]: Stopped target ignition-diskful.target. May 17 00:39:03.712370 systemd[1]: Stopped target initrd-root-device.target. May 17 00:39:03.712552 systemd[1]: Stopped target remote-fs.target. May 17 00:39:03.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.712765 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:39:03.712957 systemd[1]: Stopped target sysinit.target. May 17 00:39:03.713158 systemd[1]: Stopped target local-fs.target. May 17 00:39:03.713330 systemd[1]: Stopped target local-fs-pre.target. May 17 00:39:03.713519 systemd[1]: Stopped target swap.target. May 17 00:39:03.713702 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:39:03.713795 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:39:03.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.713993 systemd[1]: Stopped target cryptsetup.target. May 17 00:39:03.717151 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:39:03.717229 systemd[1]: Stopped dracut-initqueue.service. May 17 00:39:03.717365 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 17 00:39:03.717443 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:39:03.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.720648 systemd[1]: Stopped target paths.target. May 17 00:39:03.720757 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:39:03.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.724127 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:39:03.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.724374 systemd[1]: Stopped target slices.target. May 17 00:39:03.724548 systemd[1]: Stopped target sockets.target. May 17 00:39:03.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:03.724743 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:39:03.724824 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:39:03.724950 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:39:03.725022 systemd[1]: Stopped ignition-files.service. May 17 00:39:03.725632 systemd[1]: Stopping ignition-mount.service... May 17 00:39:03.726085 systemd[1]: Stopping iscsid.service... May 17 00:39:03.726827 systemd[1]: Stopping sysroot-boot.service... May 17 00:39:03.726992 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:39:03.727134 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:39:03.727310 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:39:03.727414 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:39:03.732596 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:39:03.732683 systemd[1]: Finished initrd-cleanup.service. May 17 00:39:03.732951 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:39:03.733019 systemd[1]: Stopped iscsid.service. May 17 00:39:03.733535 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:39:03.733588 systemd[1]: Stopped ignition-mount.service. 
May 17 00:39:03.734490 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:39:03.734514 systemd[1]: Closed iscsid.socket. May 17 00:39:03.734750 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:39:03.734778 systemd[1]: Stopped ignition-disks.service. May 17 00:39:03.734954 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:39:03.734977 systemd[1]: Stopped ignition-kargs.service. May 17 00:39:03.735339 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:39:03.735364 systemd[1]: Stopped ignition-setup.service. May 17 00:39:03.736210 systemd[1]: Stopping iscsiuio.service... May 17 00:39:03.738892 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:39:03.738959 systemd[1]: Stopped iscsiuio.service. May 17 00:39:03.739431 systemd[1]: Stopped target network.target. May 17 00:39:03.739548 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:39:03.739570 systemd[1]: Closed iscsiuio.socket. May 17 00:39:03.739786 systemd[1]: Stopping systemd-networkd.service... May 17 00:39:03.739983 systemd[1]: Stopping systemd-resolved.service... May 17 00:39:03.745026 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:39:03.749331 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:39:03.749400 systemd[1]: Stopped systemd-resolved.service. May 17 00:39:03.754171 systemd-networkd[712]: eth0: DHCPv6 lease lost May 17 00:39:03.829000 audit: BPF prog-id=9 op=UNLOAD May 17 00:39:03.754956 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:39:03.755059 systemd[1]: Stopped systemd-networkd.service. May 17 00:39:03.757041 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:39:03.757071 systemd[1]: Closed systemd-networkd.socket. May 17 00:39:03.759771 systemd[1]: Stopping network-cleanup.service... May 17 00:39:03.761170 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:39:03.761215 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:39:03.763083 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:39:03.763123 systemd[1]: Stopped systemd-sysctl.service. May 17 00:39:03.764910 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:39:03.764941 systemd[1]: Stopped systemd-modules-load.service. May 17 00:39:03.765884 systemd[1]: Stopping systemd-udevd.service... May 17 00:39:03.767516 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:39:03.769394 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:39:03.769461 systemd[1]: Stopped network-cleanup.service. May 17 00:39:03.776737 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:39:03.776844 systemd[1]: Stopped systemd-udevd.service. May 17 00:39:03.779349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:39:03.779376 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:39:03.781262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:39:03.781285 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:39:03.782888 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:39:03.782927 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:39:03.784729 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:39:03.784759 systemd[1]: Stopped dracut-cmdline.service. 
May 17 00:39:03.786431 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:39:03.786460 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:39:03.788474 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:39:03.790328 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:39:03.790378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:39:03.792543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:39:03.792573 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:39:03.793619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:39:03.793675 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:39:03.795383 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:39:03.795719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:39:03.795777 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:39:04.127166 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:39:04.127246 systemd[1]: Stopped sysroot-boot.service. May 17 00:39:04.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:04.129223 systemd[1]: Reached target initrd-switch-root.target. May 17 00:39:04.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:04.130709 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:39:04.130745 systemd[1]: Stopped initrd-setup-root.service. May 17 00:39:04.132121 systemd[1]: Starting initrd-switch-root.service... May 17 00:39:04.147986 systemd[1]: Switching root. May 17 00:39:04.165755 systemd-journald[198]: Journal stopped May 17 00:39:06.728945 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). May 17 00:39:06.728994 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:39:06.729013 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:39:06.729022 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:39:06.729031 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:39:06.729040 kernel: SELinux: policy capability open_perms=1 May 17 00:39:06.729052 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:39:06.729061 kernel: SELinux: policy capability always_check_network=0 May 17 00:39:06.729070 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:39:06.729079 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:39:06.729088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:39:06.729118 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:39:06.729129 systemd[1]: Successfully loaded SELinux policy in 36.762ms. May 17 00:39:06.729145 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.315ms. 
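Annotation: the "SELinux: policy capability ..." lines above can be read back at runtime from selinuxfs. A small sketch, assuming the usual /sys/fs/selinux mount point:

from pathlib import Path

# Each file under policy_capabilities holds 0 or 1, matching the
# "SELinux: policy capability <name>=<value>" kernel lines above.
caps_dir = Path("/sys/fs/selinux/policy_capabilities")
for cap in sorted(caps_dir.iterdir()):
    print(f"policy capability {cap.name}={cap.read_text().strip()}")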
May 17 00:39:06.729158 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:39:06.729169 systemd[1]: Detected virtualization kvm. May 17 00:39:06.729179 systemd[1]: Detected architecture x86-64. May 17 00:39:06.729189 systemd[1]: Detected first boot. May 17 00:39:06.729199 systemd[1]: Initializing machine ID from VM UUID. May 17 00:39:06.729210 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:39:06.729220 systemd[1]: Populated /etc with preset unit settings. May 17 00:39:06.729231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:06.729243 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:06.729254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:39:06.729265 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:39:06.729275 systemd[1]: Stopped initrd-switch-root.service. May 17 00:39:06.729285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:39:06.729297 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:39:06.729308 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:39:06.729320 systemd[1]: Created slice system-getty.slice. May 17 00:39:06.729330 systemd[1]: Created slice system-modprobe.slice. May 17 00:39:06.729341 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:39:06.729354 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:39:06.729370 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:39:06.729385 systemd[1]: Created slice user.slice. May 17 00:39:06.729397 systemd[1]: Started systemd-ask-password-console.path. May 17 00:39:06.729410 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:39:06.729422 systemd[1]: Set up automount boot.automount. May 17 00:39:06.729435 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:39:06.729447 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:39:06.729459 systemd[1]: Stopped target initrd-fs.target. May 17 00:39:06.729471 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:39:06.729483 systemd[1]: Reached target integritysetup.target. May 17 00:39:06.729495 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:39:06.729504 systemd[1]: Reached target remote-fs.target. May 17 00:39:06.729514 systemd[1]: Reached target slices.target. May 17 00:39:06.729524 systemd[1]: Reached target swap.target. May 17 00:39:06.729534 systemd[1]: Reached target torcx.target. May 17 00:39:06.729543 systemd[1]: Reached target veritysetup.target. May 17 00:39:06.729560 systemd[1]: Listening on systemd-coredump.socket. May 17 00:39:06.729571 systemd[1]: Listening on systemd-initctl.socket. May 17 00:39:06.729581 systemd[1]: Listening on systemd-networkd.socket. 
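Annotation: the locksmithd.service warnings above have mechanical fixes via a drop-in. A hedged sketch follows; the log does not show the original CPUShares=/MemoryLimit= values, so the figures below are placeholders, and the shares-to-weight conversion (weight ≈ shares * 100 / 1024) is the usual cgroup v1 to v2 mapping, not something this log states:

from pathlib import Path

# Placeholder values: lines 8 and 9 of locksmithd.service use deprecated keys,
# but their actual numbers are not visible in this log.
dropin = """\
[Service]
# was: CPUShares=... (deprecated); weight ~ shares * 100 / 1024
CPUWeight=100
# was: MemoryLimit=... (deprecated)
MemoryMax=128M
"""
path = Path("/etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf")
path.parent.mkdir(parents=True, exist_ok=True)  # needs root
path.write_text(dropin)
print(f"wrote {path}; follow up with: systemctl daemon-reload")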
May 17 00:39:06.729591 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:39:06.729603 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:39:06.729612 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:39:06.729622 systemd[1]: Mounting dev-hugepages.mount... May 17 00:39:06.729632 systemd[1]: Mounting dev-mqueue.mount... May 17 00:39:06.729645 systemd[1]: Mounting media.mount... May 17 00:39:06.729655 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:06.729665 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:39:06.729675 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:39:06.729684 systemd[1]: Mounting tmp.mount... May 17 00:39:06.729696 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:39:06.729705 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:06.729715 systemd[1]: Starting kmod-static-nodes.service... May 17 00:39:06.729725 systemd[1]: Starting modprobe@configfs.service... May 17 00:39:06.729736 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:06.729745 systemd[1]: Starting modprobe@drm.service... May 17 00:39:06.729755 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:06.729766 systemd[1]: Starting modprobe@fuse.service... May 17 00:39:06.729775 systemd[1]: Starting modprobe@loop.service... May 17 00:39:06.729788 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:39:06.729798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:39:06.729809 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:39:06.729818 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:39:06.729828 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:39:06.729838 systemd[1]: Stopped systemd-journald.service. May 17 00:39:06.729847 kernel: fuse: init (API version 7.34) May 17 00:39:06.729857 kernel: loop: module loaded May 17 00:39:06.729866 systemd[1]: Starting systemd-journald.service... May 17 00:39:06.729878 systemd[1]: Starting systemd-modules-load.service... May 17 00:39:06.729887 systemd[1]: Starting systemd-network-generator.service... May 17 00:39:06.729897 systemd[1]: Starting systemd-remount-fs.service... May 17 00:39:06.729908 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:39:06.729917 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:39:06.729927 systemd[1]: Stopped verity-setup.service. May 17 00:39:06.729937 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:06.729953 systemd-journald[990]: Journal started May 17 00:39:06.729989 systemd-journald[990]: Runtime Journal (/run/log/journal/cf38f2814328459f9ffde259924f5a7d) is 6.0M, max 48.5M, 42.5M free. 
May 17 00:39:04.221000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:39:04.371000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:39:04.371000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:39:04.371000 audit: BPF prog-id=10 op=LOAD May 17 00:39:04.371000 audit: BPF prog-id=10 op=UNLOAD May 17 00:39:04.371000 audit: BPF prog-id=11 op=LOAD May 17 00:39:04.371000 audit: BPF prog-id=11 op=UNLOAD May 17 00:39:04.403000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:39:04.403000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:04.403000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:39:04.404000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:39:04.404000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:04.404000 audit: CWD cwd="/" May 17 00:39:04.404000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:04.404000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:04.404000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:39:06.592000 audit: BPF prog-id=12 op=LOAD May 17 00:39:06.592000 audit: BPF prog-id=3 op=UNLOAD May 17 00:39:06.592000 audit: BPF prog-id=13 op=LOAD May 17 00:39:06.592000 audit: BPF prog-id=14 op=LOAD May 17 00:39:06.592000 audit: BPF prog-id=4 op=UNLOAD May 17 00:39:06.592000 audit: BPF prog-id=5 op=UNLOAD May 17 00:39:06.593000 audit: BPF prog-id=15 op=LOAD May 17 00:39:06.593000 audit: BPF prog-id=12 op=UNLOAD May 17 
00:39:06.593000 audit: BPF prog-id=16 op=LOAD May 17 00:39:06.593000 audit: BPF prog-id=17 op=LOAD May 17 00:39:06.593000 audit: BPF prog-id=13 op=UNLOAD May 17 00:39:06.593000 audit: BPF prog-id=14 op=UNLOAD May 17 00:39:06.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.607000 audit: BPF prog-id=15 op=UNLOAD May 17 00:39:06.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.705000 audit: BPF prog-id=18 op=LOAD May 17 00:39:06.705000 audit: BPF prog-id=19 op=LOAD May 17 00:39:06.705000 audit: BPF prog-id=20 op=LOAD May 17 00:39:06.705000 audit: BPF prog-id=16 op=UNLOAD May 17 00:39:06.706000 audit: BPF prog-id=17 op=UNLOAD May 17 00:39:06.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.726000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:39:06.726000 audit[990]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdf8ca3090 a2=4000 a3=7ffdf8ca312c items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:06.726000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:39:06.590716 systemd[1]: Queued start job for default target multi-user.target. 
May 17 00:39:04.401577 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:06.590726 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 17 00:39:04.401802 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:39:06.593728 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:39:04.401821 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:39:04.401850 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:39:04.401861 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:39:04.401890 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:39:04.401903 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:39:04.402079 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:39:04.402144 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:39:04.402158 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:39:06.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.732116 systemd[1]: Started systemd-journald.service. 
May 17 00:39:04.402733 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:39:04.402775 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:39:04.402793 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:39:04.402805 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:39:06.732338 systemd[1]: Mounted dev-hugepages.mount. May 17 00:39:04.402820 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:39:04.402832 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:39:06.286789 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:39:06.287034 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:39:06.287150 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:39:06.287298 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:39:06.287344 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:39:06.287395 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-17T00:39:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:39:06.733276 systemd[1]: Mounted dev-mqueue.mount. 
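Annotation: the torcx-generator lines above walk a fixed list of store paths, skip the missing ones, and cache every name:reference archive they find. A minimal Python rendering of that walk, using the store_paths exactly as printed by torcx-generator[909]:

from pathlib import Path

store_paths = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.7",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.7",
    "/var/lib/torcx/store",
]

cache = {}
for store in map(Path, store_paths):
    if not store.is_dir():
        print(f'store skipped err="open {store}: no such file or directory"')
        continue
    # Archives are named <name>:<reference>.torcx.tgz, e.g. docker:20.10.torcx.tgz.
    for tgz in store.glob("*.torcx.tgz"):
        name, _, reference = tgz.name.removesuffix(".torcx.tgz").partition(":")
        cache[(name, reference)] = tgz
        print(f"new archive/reference added to cache name={name} reference={reference}")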
May 17 00:39:06.734119 systemd[1]: Mounted media.mount. May 17 00:39:06.734891 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:39:06.735806 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:39:06.736709 systemd[1]: Mounted tmp.mount. May 17 00:39:06.737652 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:39:06.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.738778 systemd[1]: Finished kmod-static-nodes.service. May 17 00:39:06.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.739853 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:39:06.740051 systemd[1]: Finished modprobe@configfs.service. May 17 00:39:06.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.741210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:06.741349 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:06.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.742401 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:39:06.742584 systemd[1]: Finished modprobe@drm.service. May 17 00:39:06.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.743671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:06.743838 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:06.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.744934 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 17 00:39:06.745092 systemd[1]: Finished modprobe@fuse.service. May 17 00:39:06.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.746169 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:06.746359 systemd[1]: Finished modprobe@loop.service. May 17 00:39:06.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.747483 systemd[1]: Finished systemd-modules-load.service. May 17 00:39:06.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.748660 systemd[1]: Finished systemd-network-generator.service. May 17 00:39:06.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.749787 systemd[1]: Finished systemd-remount-fs.service. May 17 00:39:06.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.751094 systemd[1]: Reached target network-pre.target. May 17 00:39:06.752995 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:39:06.754814 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:39:06.755600 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:39:06.757387 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:39:06.759338 systemd[1]: Starting systemd-journal-flush.service... May 17 00:39:06.760297 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:06.761284 systemd[1]: Starting systemd-random-seed.service... May 17 00:39:06.762229 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:06.764820 systemd-journald[990]: Time spent on flushing to /var/log/journal/cf38f2814328459f9ffde259924f5a7d is 13.678ms for 1104 entries. May 17 00:39:06.764820 systemd-journald[990]: System Journal (/var/log/journal/cf38f2814328459f9ffde259924f5a7d) is 8.0M, max 195.6M, 187.6M free. May 17 00:39:06.979648 systemd-journald[990]: Received client request to flush runtime journal. 
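Annotation: for scale, the journald flush accounting above works out to roughly 12 microseconds per entry:

# From systemd-journald[990] above: 13.678 ms spent flushing 1104 entries.
ms, entries = 13.678, 1104
print(f"{ms / entries * 1000:.1f} us per entry")  # ~12.4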
May 17 00:39:06.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:06.763523 systemd[1]: Starting systemd-sysctl.service... May 17 00:39:06.766261 systemd[1]: Starting systemd-sysusers.service... May 17 00:39:06.769191 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:39:06.771419 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:39:06.980505 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:39:06.791520 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:39:06.793535 systemd[1]: Starting systemd-udev-settle.service... May 17 00:39:06.805775 systemd[1]: Finished systemd-sysctl.service. May 17 00:39:06.809482 systemd[1]: Finished systemd-sysusers.service. May 17 00:39:06.811314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:39:06.827821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:39:06.911746 systemd[1]: Finished systemd-random-seed.service. May 17 00:39:06.912845 systemd[1]: Reached target first-boot-complete.target. May 17 00:39:06.980674 systemd[1]: Finished systemd-journal-flush.service. May 17 00:39:06.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.311091 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:39:07.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.312000 audit: BPF prog-id=21 op=LOAD May 17 00:39:07.313000 audit: BPF prog-id=22 op=LOAD May 17 00:39:07.313000 audit: BPF prog-id=7 op=UNLOAD May 17 00:39:07.313000 audit: BPF prog-id=8 op=UNLOAD May 17 00:39:07.313744 systemd[1]: Starting systemd-udevd.service... May 17 00:39:07.328382 systemd-udevd[1017]: Using default interface naming scheme 'v252'. May 17 00:39:07.341779 systemd[1]: Started systemd-udevd.service. May 17 00:39:07.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:39:07.343000 audit: BPF prog-id=23 op=LOAD May 17 00:39:07.344754 systemd[1]: Starting systemd-networkd.service... May 17 00:39:07.349000 audit: BPF prog-id=24 op=LOAD May 17 00:39:07.349000 audit: BPF prog-id=25 op=LOAD May 17 00:39:07.349000 audit: BPF prog-id=26 op=LOAD May 17 00:39:07.350376 systemd[1]: Starting systemd-userdbd.service... May 17 00:39:07.377033 systemd[1]: Started systemd-userdbd.service. May 17 00:39:07.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.379159 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:39:07.387220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:39:07.411122 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:39:07.415123 kernel: ACPI: button: Power Button [PWRF] May 17 00:39:07.428283 systemd-networkd[1024]: lo: Link UP May 17 00:39:07.428556 systemd-networkd[1024]: lo: Gained carrier May 17 00:39:07.429078 systemd-networkd[1024]: Enumeration completed May 17 00:39:07.429262 systemd[1]: Started systemd-networkd.service. May 17 00:39:07.429282 systemd-networkd[1024]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:39:07.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.430703 systemd-networkd[1024]: eth0: Link UP May 17 00:39:07.430808 systemd-networkd[1024]: eth0: Gained carrier May 17 00:39:07.435000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:39:07.435000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c00fcb44e0 a1=338ac a2=7ff2249efbc5 a3=5 items=110 ppid=1017 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:07.435000 audit: CWD cwd="/" May 17 00:39:07.435000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=1 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=2 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=3 name=(null) inode=14995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=4 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=5 name=(null) 
inode=14996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=6 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=7 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=8 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=9 name=(null) inode=14998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=10 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=11 name=(null) inode=14999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=12 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=13 name=(null) inode=15000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=14 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=15 name=(null) inode=15001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=16 name=(null) inode=14997 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=17 name=(null) inode=15002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=18 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=19 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=20 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=21 name=(null) inode=15004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=22 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=23 name=(null) inode=15005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=24 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=25 name=(null) inode=15006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=26 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=27 name=(null) inode=15007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=28 name=(null) inode=15003 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=29 name=(null) inode=15008 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=30 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=31 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=32 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=33 name=(null) inode=15010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=34 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=35 name=(null) inode=15011 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=36 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=37 name=(null) inode=15012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=38 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=39 name=(null) inode=15013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=40 name=(null) inode=15009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=41 name=(null) inode=15014 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=42 name=(null) inode=14994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=43 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=44 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=45 name=(null) inode=15016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=46 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=47 name=(null) inode=15017 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=48 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=49 name=(null) inode=15018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=50 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=51 name=(null) inode=15019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=52 name=(null) inode=15015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=53 name=(null) inode=15020 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH 
item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=55 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=56 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=57 name=(null) inode=15022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=58 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=59 name=(null) inode=15023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=60 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=61 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=62 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=63 name=(null) inode=15025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=64 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=65 name=(null) inode=15026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=66 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=67 name=(null) inode=15027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=68 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=69 name=(null) inode=15028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=70 name=(null) inode=15024 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=71 name=(null) inode=15029 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=72 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=73 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=74 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=75 name=(null) inode=15031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=76 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=77 name=(null) inode=15032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=78 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=79 name=(null) inode=15033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=80 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=81 name=(null) inode=15034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=82 name=(null) inode=15030 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=83 name=(null) inode=15035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=84 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=85 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=86 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=87 name=(null) inode=15037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=88 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=89 name=(null) inode=15038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=90 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=91 name=(null) inode=15039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=92 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=93 name=(null) inode=15040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=94 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=95 name=(null) inode=15041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=96 name=(null) inode=15021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=97 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=98 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=99 name=(null) inode=15043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=100 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=101 name=(null) inode=15044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=102 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH 
item=103 name=(null) inode=15045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=104 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=105 name=(null) inode=15046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=106 name=(null) inode=15042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=107 name=(null) inode=15047 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PATH item=109 name=(null) inode=15048 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:39:07.435000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:39:07.444257 systemd-networkd[1024]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:39:07.449377 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:39:07.449650 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:39:07.449806 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:39:07.506135 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:39:07.513115 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:39:07.532597 kernel: kvm: Nested Virtualization enabled May 17 00:39:07.532642 kernel: SVM: kvm: Nested Paging enabled May 17 00:39:07.533360 kernel: SVM: Virtual VMLOAD VMSAVE supported May 17 00:39:07.533469 kernel: SVM: Virtual GIF supported May 17 00:39:07.548133 kernel: EDAC MC: Ver: 3.0.0 May 17 00:39:07.573462 systemd[1]: Finished systemd-udev-settle.service. May 17 00:39:07.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.575471 systemd[1]: Starting lvm2-activation-early.service... May 17 00:39:07.582222 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:39:07.610166 systemd[1]: Finished lvm2-activation-early.service. May 17 00:39:07.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.611300 systemd[1]: Reached target cryptsetup.target. May 17 00:39:07.613121 systemd[1]: Starting lvm2-activation.service... May 17 00:39:07.616734 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:39:07.645374 systemd[1]: Finished lvm2-activation.service. May 17 00:39:07.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.646477 systemd[1]: Reached target local-fs-pre.target. May 17 00:39:07.647436 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:39:07.647458 systemd[1]: Reached target local-fs.target. May 17 00:39:07.648356 systemd[1]: Reached target machines.target. May 17 00:39:07.650258 systemd[1]: Starting ldconfig.service... May 17 00:39:07.651300 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:07.651342 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:07.652249 systemd[1]: Starting systemd-boot-update.service... May 17 00:39:07.654606 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:39:07.657029 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:39:07.659925 systemd[1]: Starting systemd-sysext.service... May 17 00:39:07.661956 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) May 17 00:39:07.662898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:39:07.672927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:39:07.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.677273 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:39:07.681688 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:39:07.681870 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:39:07.694143 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:39:07.696678 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31) May 17 00:39:07.696678 systemd-fsck[1062]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:39:07.698269 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:39:07.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.702276 systemd[1]: Mounting boot.mount... May 17 00:39:07.729392 systemd[1]: Mounted boot.mount. May 17 00:39:07.740545 systemd[1]: Finished systemd-boot-update.service. May 17 00:39:07.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:07.760189 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 17 00:39:08.317129 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:39:08.338119 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:39:08.345163 (sd-sysext)[1068]: Using extensions 'kubernetes'. May 17 00:39:08.345595 (sd-sysext)[1068]: Merged extensions into '/usr'. May 17 00:39:08.346703 systemd[1]: Finished ldconfig.service. May 17 00:39:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.364261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:39:08.365143 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:39:08.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.366567 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.367756 systemd[1]: Mounting usr-share-oem.mount... May 17 00:39:08.368682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.369669 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.371488 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.373358 systemd[1]: Starting modprobe@loop.service... May 17 00:39:08.374512 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:08.374647 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.374785 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.377275 systemd[1]: Mounted usr-share-oem.mount. May 17 00:39:08.378410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.378553 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:08.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.379828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.379948 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.381287 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.381401 systemd[1]: Finished modprobe@loop.service. 
May 17 00:39:08.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.382911 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:08.383031 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:08.383855 systemd[1]: Finished systemd-sysext.service. May 17 00:39:08.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.385614 systemd[1]: Starting ensure-sysext.service... May 17 00:39:08.387217 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:39:08.392848 systemd[1]: Reloading. May 17 00:39:08.399532 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:39:08.400560 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:39:08.402689 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:39:08.437267 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-05-17T00:39:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:08.437290 /usr/lib/systemd/system-generators/torcx-generator[1094]: time="2025-05-17T00:39:08Z" level=info msg="torcx already run" May 17 00:39:08.511315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:08.511331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:08.528568 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:39:08.580000 audit: BPF prog-id=27 op=LOAD May 17 00:39:08.580000 audit: BPF prog-id=23 op=UNLOAD May 17 00:39:08.581000 audit: BPF prog-id=28 op=LOAD May 17 00:39:08.581000 audit: BPF prog-id=18 op=UNLOAD May 17 00:39:08.581000 audit: BPF prog-id=29 op=LOAD May 17 00:39:08.581000 audit: BPF prog-id=30 op=LOAD May 17 00:39:08.581000 audit: BPF prog-id=19 op=UNLOAD May 17 00:39:08.581000 audit: BPF prog-id=20 op=UNLOAD May 17 00:39:08.582000 audit: BPF prog-id=31 op=LOAD May 17 00:39:08.582000 audit: BPF prog-id=32 op=LOAD May 17 00:39:08.582000 audit: BPF prog-id=21 op=UNLOAD May 17 00:39:08.583000 audit: BPF prog-id=22 op=UNLOAD May 17 00:39:08.584000 audit: BPF prog-id=33 op=LOAD May 17 00:39:08.584000 audit: BPF prog-id=24 op=UNLOAD May 17 00:39:08.584000 audit: BPF prog-id=34 op=LOAD May 17 00:39:08.584000 audit: BPF prog-id=35 op=LOAD May 17 00:39:08.584000 audit: BPF prog-id=25 op=UNLOAD May 17 00:39:08.584000 audit: BPF prog-id=26 op=UNLOAD May 17 00:39:08.586022 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:39:08.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.590086 systemd[1]: Starting audit-rules.service... May 17 00:39:08.591865 systemd[1]: Starting clean-ca-certificates.service... May 17 00:39:08.593838 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:39:08.596000 audit: BPF prog-id=36 op=LOAD May 17 00:39:08.597895 systemd[1]: Starting systemd-resolved.service... May 17 00:39:08.599000 audit: BPF prog-id=37 op=LOAD May 17 00:39:08.600359 systemd[1]: Starting systemd-timesyncd.service... May 17 00:39:08.602411 systemd[1]: Starting systemd-update-utmp.service... May 17 00:39:08.604084 systemd[1]: Finished clean-ca-certificates.service. May 17 00:39:08.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:39:08.606000 audit[1148]: SYSTEM_BOOT pid=1148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:39:08.612076 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.612425 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.614309 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.616645 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.618000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:39:08.618000 audit[1159]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc4694ba0 a2=420 a3=0 items=0 ppid=1137 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:39:08.618000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:39:08.618728 systemd[1]: Starting modprobe@loop.service... 
May 17 00:39:08.619062 augenrules[1159]: No rules May 17 00:39:08.619625 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:08.619789 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.619986 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:39:08.620143 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.621381 systemd[1]: Finished audit-rules.service. May 17 00:39:08.622926 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:39:08.624550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.624669 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:08.626028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.626131 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.627624 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.627713 systemd[1]: Finished modprobe@loop.service. May 17 00:39:08.629571 systemd[1]: Finished systemd-update-utmp.service. May 17 00:39:08.635641 systemd[1]: Finished ensure-sysext.service. May 17 00:39:08.637302 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.637444 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:39:08.638207 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:39:08.639667 systemd[1]: Starting modprobe@drm.service... May 17 00:39:08.641082 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:39:08.642698 systemd[1]: Starting modprobe@loop.service... May 17 00:39:08.643704 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:39:08.643744 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.644614 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:39:08.647176 systemd[1]: Starting systemd-update-done.service... May 17 00:39:08.648340 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:39:08.648378 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:39:08.648892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:39:08.649032 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:39:08.650387 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:39:08.650517 systemd[1]: Finished modprobe@drm.service. May 17 00:39:08.651789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:39:08.651907 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:39:08.653391 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:39:08.653575 systemd[1]: Finished modprobe@loop.service. May 17 00:39:08.655067 systemd[1]: Finished systemd-update-done.service. 
May 17 00:39:08.656511 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:39:08.656557 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:39:08.658619 systemd[1]: Started systemd-timesyncd.service. May 17 00:39:08.660051 systemd-timesyncd[1147]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:39:08.660131 systemd-timesyncd[1147]: Initial clock synchronization to Sat 2025-05-17 00:39:08.787718 UTC. May 17 00:39:08.660172 systemd[1]: Reached target time-set.target. May 17 00:39:08.666576 systemd-resolved[1144]: Positive Trust Anchors: May 17 00:39:08.666588 systemd-resolved[1144]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:39:08.666622 systemd-resolved[1144]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:39:08.673162 systemd-resolved[1144]: Defaulting to hostname 'linux'. May 17 00:39:08.674390 systemd[1]: Started systemd-resolved.service. May 17 00:39:08.675362 systemd[1]: Reached target network.target. May 17 00:39:08.676203 systemd[1]: Reached target nss-lookup.target. May 17 00:39:08.677069 systemd[1]: Reached target sysinit.target. May 17 00:39:08.677971 systemd[1]: Started motdgen.path. May 17 00:39:08.678743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:39:08.680052 systemd[1]: Started logrotate.timer. May 17 00:39:08.680898 systemd[1]: Started mdadm.timer. May 17 00:39:08.681673 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:39:08.682657 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:39:08.682679 systemd[1]: Reached target paths.target. May 17 00:39:08.683483 systemd[1]: Reached target timers.target. May 17 00:39:08.684548 systemd[1]: Listening on dbus.socket. May 17 00:39:08.686218 systemd[1]: Starting docker.socket... May 17 00:39:08.688727 systemd[1]: Listening on sshd.socket. May 17 00:39:08.689641 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.689944 systemd[1]: Listening on docker.socket. May 17 00:39:08.690828 systemd[1]: Reached target sockets.target. May 17 00:39:08.691713 systemd[1]: Reached target basic.target. May 17 00:39:08.692615 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:39:08.692635 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:39:08.693296 systemd[1]: Starting containerd.service... May 17 00:39:08.694995 systemd[1]: Starting dbus.service... May 17 00:39:08.696586 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:39:08.698422 systemd[1]: Starting extend-filesystems.service... 
May 17 00:39:08.699480 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:39:08.700203 jq[1176]: false May 17 00:39:08.700204 systemd[1]: Starting motdgen.service... May 17 00:39:08.701927 systemd[1]: Starting prepare-helm.service... May 17 00:39:08.704131 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:39:08.706335 systemd[1]: Starting sshd-keygen.service... May 17 00:39:08.709844 systemd[1]: Starting systemd-logind.service... May 17 00:39:08.710994 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:39:08.711066 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:39:08.711723 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:39:08.712986 systemd[1]: Starting update-engine.service... May 17 00:39:08.715338 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:39:08.718358 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:39:08.718554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:39:08.719265 jq[1194]: true May 17 00:39:08.718883 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:39:08.719035 systemd[1]: Finished motdgen.service. May 17 00:39:08.721352 dbus-daemon[1175]: [system] SELinux support is enabled May 17 00:39:08.721889 systemd[1]: Started dbus.service. May 17 00:39:08.725820 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:39:08.726004 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:39:08.728508 extend-filesystems[1177]: Found loop1 May 17 00:39:08.728508 extend-filesystems[1177]: Found sr0 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda May 17 00:39:08.728508 extend-filesystems[1177]: Found vda1 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda2 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda3 May 17 00:39:08.728508 extend-filesystems[1177]: Found usr May 17 00:39:08.728508 extend-filesystems[1177]: Found vda4 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda6 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda7 May 17 00:39:08.728508 extend-filesystems[1177]: Found vda9 May 17 00:39:08.728508 extend-filesystems[1177]: Checking size of /dev/vda9 May 17 00:39:08.754010 update_engine[1192]: I0517 00:39:08.746279 1192 main.cc:92] Flatcar Update Engine starting May 17 00:39:08.754183 tar[1196]: linux-amd64/LICENSE May 17 00:39:08.754183 tar[1196]: linux-amd64/helm May 17 00:39:08.739687 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:39:08.739714 systemd[1]: Reached target system-config.target. May 17 00:39:08.754660 jq[1200]: true May 17 00:39:08.743286 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:39:08.743303 systemd[1]: Reached target user-config.target. 
May 17 00:39:08.755435 extend-filesystems[1177]: Resized partition /dev/vda9 May 17 00:39:08.758506 update_engine[1192]: I0517 00:39:08.756855 1192 update_check_scheduler.cc:74] Next update check in 3m24s May 17 00:39:08.758552 extend-filesystems[1212]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:39:08.761773 systemd[1]: Started update-engine.service. May 17 00:39:08.764696 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:39:08.767225 systemd[1]: Started locksmithd.service. May 17 00:39:08.787949 systemd-logind[1189]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:39:08.787973 systemd-logind[1189]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:39:08.788186 systemd-logind[1189]: New seat seat0. May 17 00:39:08.791115 env[1201]: time="2025-05-17T00:39:08.789892778Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:39:08.794561 systemd[1]: Started systemd-logind.service. May 17 00:39:08.802115 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:39:08.811195 env[1201]: time="2025-05-17T00:39:08.811149469Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:39:08.827061 extend-filesystems[1212]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:39:08.827061 extend-filesystems[1212]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:39:08.827061 extend-filesystems[1212]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:39:08.831933 extend-filesystems[1177]: Resized filesystem in /dev/vda9 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.831430551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.832844363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.832867366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.833062722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.833076969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.833088230Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.833107586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.834197491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.834388499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:39:08.834693 env[1201]: time="2025-05-17T00:39:08.834502092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:39:08.832029 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:39:08.834975 env[1201]: time="2025-05-17T00:39:08.834518513Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:39:08.834975 env[1201]: time="2025-05-17T00:39:08.834558688Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:39:08.834975 env[1201]: time="2025-05-17T00:39:08.834568657Z" level=info msg="metadata content store policy set" policy=shared May 17 00:39:08.832214 systemd[1]: Finished extend-filesystems.service. May 17 00:39:08.836331 bash[1227]: Updated "/home/core/.ssh/authorized_keys" May 17 00:39:08.836955 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:39:08.842079 env[1201]: time="2025-05-17T00:39:08.842046563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:39:08.842133 env[1201]: time="2025-05-17T00:39:08.842081509Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:39:08.842133 env[1201]: time="2025-05-17T00:39:08.842112647Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:39:08.842181 env[1201]: time="2025-05-17T00:39:08.842147362Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842181 env[1201]: time="2025-05-17T00:39:08.842165687Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842216 env[1201]: time="2025-05-17T00:39:08.842182759Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842216 env[1201]: time="2025-05-17T00:39:08.842197156Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842252 env[1201]: time="2025-05-17T00:39:08.842216031Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842252 env[1201]: time="2025-05-17T00:39:08.842231691Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842304 env[1201]: time="2025-05-17T00:39:08.842247270Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842304 env[1201]: time="2025-05-17T00:39:08.842262739Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842304 env[1201]: time="2025-05-17T00:39:08.842276685Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 17 00:39:08.842388 env[1201]: time="2025-05-17T00:39:08.842366093Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:39:08.842473 env[1201]: time="2025-05-17T00:39:08.842454238Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:39:08.842757 env[1201]: time="2025-05-17T00:39:08.842735496Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:39:08.842784 env[1201]: time="2025-05-17T00:39:08.842771854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842804 env[1201]: time="2025-05-17T00:39:08.842789317Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:39:08.842854 env[1201]: time="2025-05-17T00:39:08.842837257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842876 env[1201]: time="2025-05-17T00:39:08.842855831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842876 env[1201]: time="2025-05-17T00:39:08.842870759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842912 env[1201]: time="2025-05-17T00:39:08.842884435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842912 env[1201]: time="2025-05-17T00:39:08.842899153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842951 env[1201]: time="2025-05-17T00:39:08.842913780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842951 env[1201]: time="2025-05-17T00:39:08.842927997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:39:08.842951 env[1201]: time="2025-05-17T00:39:08.842942093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:39:08.843006 env[1201]: time="2025-05-17T00:39:08.842958624Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:39:08.843160 env[1201]: time="2025-05-17T00:39:08.843084610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:39:08.843160 env[1201]: time="2025-05-17T00:39:08.843125016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:39:08.843160 env[1201]: time="2025-05-17T00:39:08.843141267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:39:08.843160 env[1201]: time="2025-05-17T00:39:08.843156095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:39:08.843261 env[1201]: time="2025-05-17T00:39:08.843173237Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:39:08.843261 env[1201]: time="2025-05-17T00:39:08.843198404Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 17 00:39:08.843261 env[1201]: time="2025-05-17T00:39:08.843219213Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:39:08.843261 env[1201]: time="2025-05-17T00:39:08.843255862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:39:08.843560 env[1201]: time="2025-05-17T00:39:08.843495812Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:39:08.844199 env[1201]: time="2025-05-17T00:39:08.843564090Z" level=info msg="Connect containerd service" May 17 00:39:08.844199 env[1201]: time="2025-05-17T00:39:08.843603043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:39:08.844199 env[1201]: time="2025-05-17T00:39:08.844162873Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:39:08.844253 locksmithd[1221]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:39:08.844540 systemd[1]: Started containerd.service. May 17 00:39:08.844743 env[1201]: time="2025-05-17T00:39:08.844364561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:39:08.844743 env[1201]: time="2025-05-17T00:39:08.844403745Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:39:08.844743 env[1201]: time="2025-05-17T00:39:08.844450763Z" level=info msg="containerd successfully booted in 0.060231s" May 17 00:39:08.845291 env[1201]: time="2025-05-17T00:39:08.845248900Z" level=info msg="Start subscribing containerd event" May 17 00:39:08.845336 env[1201]: time="2025-05-17T00:39:08.845309584Z" level=info msg="Start recovering state" May 17 00:39:08.846132 env[1201]: time="2025-05-17T00:39:08.845383482Z" level=info msg="Start event monitor" May 17 00:39:08.846132 env[1201]: time="2025-05-17T00:39:08.845414380Z" level=info msg="Start snapshots syncer" May 17 00:39:08.846132 env[1201]: time="2025-05-17T00:39:08.845429168Z" level=info msg="Start cni network conf syncer for default" May 17 00:39:08.846132 env[1201]: time="2025-05-17T00:39:08.845442834Z" level=info msg="Start streaming server" May 17 00:39:09.197992 tar[1196]: linux-amd64/README.md May 17 00:39:09.202349 systemd[1]: Finished prepare-helm.service. May 17 00:39:09.286329 systemd-networkd[1024]: eth0: Gained IPv6LL May 17 00:39:09.288074 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:39:09.289729 systemd[1]: Reached target network-online.target. May 17 00:39:09.292173 systemd[1]: Starting kubelet.service... May 17 00:39:09.949018 systemd[1]: Started kubelet.service. May 17 00:39:10.351170 kubelet[1242]: E0517 00:39:10.351043 1242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:10.352957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:10.353126 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:10.391740 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:39:10.409030 systemd[1]: Finished sshd-keygen.service. May 17 00:39:10.411700 systemd[1]: Starting issuegen.service... May 17 00:39:10.415869 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:39:10.416022 systemd[1]: Finished issuegen.service. May 17 00:39:10.418090 systemd[1]: Starting systemd-user-sessions.service... May 17 00:39:10.422866 systemd[1]: Finished systemd-user-sessions.service. May 17 00:39:10.424933 systemd[1]: Started getty@tty1.service. May 17 00:39:10.426718 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:39:10.427881 systemd[1]: Reached target getty.target. May 17 00:39:10.428788 systemd[1]: Reached target multi-user.target. May 17 00:39:10.430573 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:39:10.436448 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:39:10.436566 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:39:10.437689 systemd[1]: Startup finished in 640ms (kernel) + 5.461s (initrd) + 6.255s (userspace) = 12.357s. May 17 00:39:11.468680 systemd[1]: Created slice system-sshd.slice. May 17 00:39:11.469626 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:58876.service. 
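
The kubelet failure logged above (exit status 1, /var/lib/kubelet/config.yaml missing) is the normal pre-bootstrap state on a kubeadm-style node: the service unit starts before the config file exists, and that file is only written by kubeadm during init or join. A hedged sketch for confirming this is the failure mode rather than a real fault, assuming standard kubeadm paths:

    # Until kubeadm init/join runs, the kubelet has no config to load.
    test -f /var/lib/kubelet/config.yaml || echo "kubelet not yet bootstrapped"

    # The unit will keep restarting with the same error until then.
    journalctl -u kubelet.service -n 20 --no-pager
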
May 17 00:39:11.510795 sshd[1264]: Accepted publickey for core from 10.0.0.1 port 58876 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.512295 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.519586 systemd[1]: Created slice user-500.slice. May 17 00:39:11.520558 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:39:11.522266 systemd-logind[1189]: New session 1 of user core. May 17 00:39:11.528705 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:39:11.530054 systemd[1]: Starting user@500.service... May 17 00:39:11.532649 (systemd)[1267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.596843 systemd[1267]: Queued start job for default target default.target. May 17 00:39:11.597288 systemd[1267]: Reached target paths.target. May 17 00:39:11.597316 systemd[1267]: Reached target sockets.target. May 17 00:39:11.597332 systemd[1267]: Reached target timers.target. May 17 00:39:11.597346 systemd[1267]: Reached target basic.target. May 17 00:39:11.597390 systemd[1267]: Reached target default.target. May 17 00:39:11.597419 systemd[1267]: Startup finished in 60ms. May 17 00:39:11.597475 systemd[1]: Started user@500.service. May 17 00:39:11.598449 systemd[1]: Started session-1.scope. May 17 00:39:11.651830 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:58886.service. May 17 00:39:11.693879 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 58886 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.695041 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.699067 systemd-logind[1189]: New session 2 of user core. May 17 00:39:11.699992 systemd[1]: Started session-2.scope. May 17 00:39:11.755340 sshd[1276]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.758331 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:58886.service: Deactivated successfully. May 17 00:39:11.758904 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:39:11.759387 systemd-logind[1189]: Session 2 logged out. Waiting for processes to exit. May 17 00:39:11.760699 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:58888.service. May 17 00:39:11.761298 systemd-logind[1189]: Removed session 2. May 17 00:39:11.799477 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 58888 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.800490 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.803808 systemd-logind[1189]: New session 3 of user core. May 17 00:39:11.804660 systemd[1]: Started session-3.scope. May 17 00:39:11.854705 sshd[1282]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.857313 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:58888.service: Deactivated successfully. May 17 00:39:11.857769 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:39:11.858260 systemd-logind[1189]: Session 3 logged out. Waiting for processes to exit. May 17 00:39:11.859190 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:58898.service. May 17 00:39:11.860177 systemd-logind[1189]: Removed session 3. 
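
The session churn above (slices, user@500.service, session scopes) is systemd-logind's per-user plumbing: the first login creates user-500.slice plus a per-user service manager, and each subsequent SSH connection gets its own session-N.scope inside that slice. A quick way to see the same structure on a running host, assuming the usual logind tooling:

    # One line per active session; "core" maps to uid 500 in this log.
    loginctl list-sessions

    # The per-user service manager started by the first login.
    systemctl status user@500.service
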
May 17 00:39:11.899487 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 58898 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:11.900683 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:11.904478 systemd-logind[1189]: New session 4 of user core. May 17 00:39:11.905331 systemd[1]: Started session-4.scope. May 17 00:39:11.959212 sshd[1288]: pam_unix(sshd:session): session closed for user core May 17 00:39:11.961996 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:58898.service: Deactivated successfully. May 17 00:39:11.962672 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:39:11.963248 systemd-logind[1189]: Session 4 logged out. Waiting for processes to exit. May 17 00:39:11.964355 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:58908.service. May 17 00:39:11.965025 systemd-logind[1189]: Removed session 4. May 17 00:39:12.004236 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 58908 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:39:12.005304 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:39:12.008636 systemd-logind[1189]: New session 5 of user core. May 17 00:39:12.009562 systemd[1]: Started session-5.scope. May 17 00:39:12.065220 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:39:12.065400 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:39:12.084966 systemd[1]: Starting docker.service... May 17 00:39:12.122595 env[1310]: time="2025-05-17T00:39:12.122534250Z" level=info msg="Starting up" May 17 00:39:12.124057 env[1310]: time="2025-05-17T00:39:12.124017198Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:39:12.124057 env[1310]: time="2025-05-17T00:39:12.124043556Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:39:12.124163 env[1310]: time="2025-05-17T00:39:12.124068619Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:39:12.124163 env[1310]: time="2025-05-17T00:39:12.124078816Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:39:12.125797 env[1310]: time="2025-05-17T00:39:12.125755433Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:39:12.125797 env[1310]: time="2025-05-17T00:39:12.125780191Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:39:12.125797 env[1310]: time="2025-05-17T00:39:12.125796979Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:39:12.125797 env[1310]: time="2025-05-17T00:39:12.125805019Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:39:12.132648 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2504127562-merged.mount: Deactivated successfully. May 17 00:39:13.710008 env[1310]: time="2025-05-17T00:39:13.709952094Z" level=info msg="Loading containers: start." May 17 00:39:14.033152 kernel: Initializing XFRM netlink socket May 17 00:39:14.062229 env[1310]: time="2025-05-17T00:39:14.062181788Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 17 00:39:14.110705 systemd-networkd[1024]: docker0: Link UP May 17 00:39:14.406292 env[1310]: time="2025-05-17T00:39:14.406183827Z" level=info msg="Loading containers: done." May 17 00:39:14.528477 env[1310]: time="2025-05-17T00:39:14.528435362Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:39:14.528649 env[1310]: time="2025-05-17T00:39:14.528619753Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:39:14.528750 env[1310]: time="2025-05-17T00:39:14.528735236Z" level=info msg="Daemon has completed initialization" May 17 00:39:14.617230 systemd[1]: Started docker.service. May 17 00:39:14.623633 env[1310]: time="2025-05-17T00:39:14.623572297Z" level=info msg="API listen on /run/docker.sock" May 17 00:39:15.385613 env[1201]: time="2025-05-17T00:39:15.385564372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:39:15.985236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241683805.mount: Deactivated successfully. May 17 00:39:18.020919 env[1201]: time="2025-05-17T00:39:18.020845657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:18.022826 env[1201]: time="2025-05-17T00:39:18.022771554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:18.024829 env[1201]: time="2025-05-17T00:39:18.024788466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:18.027125 env[1201]: time="2025-05-17T00:39:18.027067521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:18.027906 env[1201]: time="2025-05-17T00:39:18.027869097Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 17 00:39:18.028450 env[1201]: time="2025-05-17T00:39:18.028426249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:39:19.878716 env[1201]: time="2025-05-17T00:39:19.878637969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:19.921282 env[1201]: time="2025-05-17T00:39:19.921227215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:19.950638 env[1201]: time="2025-05-17T00:39:19.950595659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:20.003716 env[1201]: 
time="2025-05-17T00:39:20.003668355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:20.004484 env[1201]: time="2025-05-17T00:39:20.004451497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 17 00:39:20.005052 env[1201]: time="2025-05-17T00:39:20.005022495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:39:20.603959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:39:20.604193 systemd[1]: Stopped kubelet.service. May 17 00:39:20.605520 systemd[1]: Starting kubelet.service... May 17 00:39:20.982419 systemd[1]: Started kubelet.service. May 17 00:39:22.573746 kubelet[1446]: E0517 00:39:22.573670 1446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:22.576487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:22.576599 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:25.653756 env[1201]: time="2025-05-17T00:39:25.653688531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:25.724963 env[1201]: time="2025-05-17T00:39:25.724895600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:25.774518 env[1201]: time="2025-05-17T00:39:25.774476125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:25.808343 env[1201]: time="2025-05-17T00:39:25.808290563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:25.809043 env[1201]: time="2025-05-17T00:39:25.809003389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 17 00:39:25.809665 env[1201]: time="2025-05-17T00:39:25.809639216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:39:29.159345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921333483.mount: Deactivated successfully. 
May 17 00:39:29.829376 env[1201]: time="2025-05-17T00:39:29.829305123Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.839273 env[1201]: time="2025-05-17T00:39:29.839202984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.840752 env[1201]: time="2025-05-17T00:39:29.840715659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.842258 env[1201]: time="2025-05-17T00:39:29.842232696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:29.842642 env[1201]: time="2025-05-17T00:39:29.842619107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:39:29.843140 env[1201]: time="2025-05-17T00:39:29.843109145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:39:30.501351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256039941.mount: Deactivated successfully. May 17 00:39:32.553670 env[1201]: time="2025-05-17T00:39:32.553603510Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.560986 env[1201]: time="2025-05-17T00:39:32.560945593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.563324 env[1201]: time="2025-05-17T00:39:32.563252524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.565203 env[1201]: time="2025-05-17T00:39:32.565171811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:32.566000 env[1201]: time="2025-05-17T00:39:32.565949133Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:39:32.566493 env[1201]: time="2025-05-17T00:39:32.566434331Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:39:32.827514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:39:32.827731 systemd[1]: Stopped kubelet.service. May 17 00:39:32.829206 systemd[1]: Starting kubelet.service... May 17 00:39:32.914075 systemd[1]: Started kubelet.service. 
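
"Scheduled restart job, restart counter is at 2" is systemd's Restart= policy re-launching the failed kubelet, not anything kubelet-specific; the counter grows with each failed attempt and resets once the service stays up. The policy in effect can be read back from the unit; a sketch using only standard systemd properties:

    # Restart mode, back-off delay, and how many restarts systemd has done.
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts

    # Clears the failure state and the counter if the loop needs a manual reset.
    systemctl reset-failed kubelet.service
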
May 17 00:39:33.008299 kubelet[1457]: E0517 00:39:33.008230 1457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:33.010073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:33.010200 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:33.509818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803018346.mount: Deactivated successfully. May 17 00:39:33.520807 env[1201]: time="2025-05-17T00:39:33.520753818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:33.523322 env[1201]: time="2025-05-17T00:39:33.523250825Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:33.525023 env[1201]: time="2025-05-17T00:39:33.524984287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:33.526834 env[1201]: time="2025-05-17T00:39:33.526800888Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:33.527334 env[1201]: time="2025-05-17T00:39:33.527299916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:39:33.527813 env[1201]: time="2025-05-17T00:39:33.527789681Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:39:34.729317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401359978.mount: Deactivated successfully. 
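
Worth noting against the containerd configuration dumped earlier: the CRI plugin still advertises SandboxImage registry.k8s.io/pause:3.6, while the kubelet has just pulled pause:3.10, so this host ends up holding both pause images (the pause:3.6 image events further down confirm it). Aligning the two is a one-line containerd config change; a hedged sketch, assuming the file is /etc/containerd/config.toml and already carries an explicit sandbox_image line (distributions that generate the config may keep it elsewhere):

    # Show where (and whether) the sandbox image is pinned.
    grep -n 'sandbox_image' /etc/containerd/config.toml

    # Point it at the pause tag kubeadm expects, then restart containerd.
    sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.10"#' \
        /etc/containerd/config.toml
    systemctl restart containerd
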
May 17 00:39:42.540367 env[1201]: time="2025-05-17T00:39:42.540296987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:42.704700 env[1201]: time="2025-05-17T00:39:42.704627777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:42.789111 env[1201]: time="2025-05-17T00:39:42.789045072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:42.862615 env[1201]: time="2025-05-17T00:39:42.862479452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:42.863477 env[1201]: time="2025-05-17T00:39:42.863434465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 17 00:39:43.135234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:39:43.135428 systemd[1]: Stopped kubelet.service. May 17 00:39:43.136678 systemd[1]: Starting kubelet.service... May 17 00:39:43.236079 systemd[1]: Started kubelet.service. May 17 00:39:43.280253 kubelet[1477]: E0517 00:39:43.280200 1477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:39:43.281932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:39:43.282067 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:39:45.176003 systemd[1]: Stopped kubelet.service. May 17 00:39:45.178211 systemd[1]: Starting kubelet.service... May 17 00:39:45.210381 systemd[1]: Reloading. May 17 00:39:45.278121 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-05-17T00:39:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:39:45.278516 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2025-05-17T00:39:45Z" level=info msg="torcx already run" May 17 00:39:47.242064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:39:47.242080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:39:47.259006 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:39:47.335028 systemd[1]: Started kubelet.service. May 17 00:39:47.336447 systemd[1]: Stopping kubelet.service... 
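
During the daemon reload above, systemd flags two legacy cgroup-v1 directives in locksmithd.service (CPUShares=, MemoryLimit=) and names their cgroup-v2 replacements itself. The conventional fix is a drop-in override rather than editing the vendor unit under /usr/lib; a minimal sketch, with placeholder values that are assumptions, not the unit's real limits:

    # Override the vendor unit without touching /usr/lib/systemd/system/.
    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat <<'EOF' > /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
    [Service]
    # cgroup-v2 equivalents of the deprecated CPUShares=/MemoryLimit=;
    # the values below are illustrative placeholders.
    CPUWeight=100
    MemoryMax=infinity
    EOF
    systemctl daemon-reload
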
May 17 00:39:47.336809 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:39:47.336976 systemd[1]: Stopped kubelet.service. May 17 00:39:47.338365 systemd[1]: Starting kubelet.service... May 17 00:39:48.151341 systemd[1]: Started kubelet.service. May 17 00:39:48.185720 kubelet[1572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:39:48.185720 kubelet[1572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:39:48.185720 kubelet[1572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:39:48.186010 kubelet[1572]: I0517 00:39:48.185778 1572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:39:48.713375 kubelet[1572]: I0517 00:39:48.713321 1572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:39:48.713375 kubelet[1572]: I0517 00:39:48.713369 1572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:39:48.713682 kubelet[1572]: I0517 00:39:48.713655 1572 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:39:48.745474 kubelet[1572]: E0517 00:39:48.745423 1572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:48.745757 kubelet[1572]: I0517 00:39:48.745741 1572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:39:48.751474 kubelet[1572]: E0517 00:39:48.751432 1572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:39:48.751474 kubelet[1572]: I0517 00:39:48.751462 1572 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:39:48.755327 kubelet[1572]: I0517 00:39:48.755283 1572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:39:48.755668 kubelet[1572]: I0517 00:39:48.755619 1572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:39:48.755889 kubelet[1572]: I0517 00:39:48.755663 1572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:39:48.756509 kubelet[1572]: I0517 00:39:48.756479 1572 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:39:48.756509 kubelet[1572]: I0517 00:39:48.756505 1572 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:39:48.756676 kubelet[1572]: I0517 00:39:48.756655 1572 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:48.759191 kubelet[1572]: I0517 00:39:48.759158 1572 kubelet.go:446] "Attempting to sync node with API server" May 17 00:39:48.759237 kubelet[1572]: I0517 00:39:48.759194 1572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:39:48.759237 kubelet[1572]: I0517 00:39:48.759218 1572 kubelet.go:352] "Adding apiserver pod source" May 17 00:39:48.759237 kubelet[1572]: I0517 00:39:48.759231 1572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:39:48.764804 kubelet[1572]: W0517 00:39:48.764744 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:48.764804 kubelet[1572]: E0517 00:39:48.764806 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:48.764992 kubelet[1572]: W0517 00:39:48.764860 1572 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:48.764992 kubelet[1572]: E0517 00:39:48.764887 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:48.769851 kubelet[1572]: I0517 00:39:48.769823 1572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:39:48.770226 kubelet[1572]: I0517 00:39:48.770200 1572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:39:48.770277 kubelet[1572]: W0517 00:39:48.770254 1572 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:39:48.774695 kubelet[1572]: I0517 00:39:48.774647 1572 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:39:48.774767 kubelet[1572]: I0517 00:39:48.774723 1572 server.go:1287] "Started kubelet" May 17 00:39:48.774864 kubelet[1572]: I0517 00:39:48.774823 1572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:39:48.776636 kubelet[1572]: I0517 00:39:48.776616 1572 server.go:479] "Adding debug handlers to kubelet server" May 17 00:39:48.788018 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:39:48.795499 kubelet[1572]: I0517 00:39:48.795183 1572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:39:48.797073 kubelet[1572]: I0517 00:39:48.797057 1572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:39:48.799225 kubelet[1572]: E0517 00:39:48.798180 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18402998205ff8fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:39:48.77468083 +0000 UTC m=+0.620425078,LastTimestamp:2025-05-17 00:39:48.77468083 +0000 UTC m=+0.620425078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:39:48.799638 kubelet[1572]: E0517 00:39:48.799499 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:48.799738 kubelet[1572]: I0517 00:39:48.799723 1572 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:39:48.800090 kubelet[1572]: I0517 00:39:48.800077 1572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:39:48.800200 kubelet[1572]: E0517 00:39:48.799768 1572 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:39:48.800330 kubelet[1572]: I0517 00:39:48.800318 1572 reconciler.go:26] "Reconciler: start to sync state" May 17 00:39:48.800409 kubelet[1572]: I0517 00:39:48.800390 1572 factory.go:221] Registration of the systemd container factory successfully May 17 00:39:48.800492 kubelet[1572]: I0517 00:39:48.800471 1572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:39:48.800750 kubelet[1572]: I0517 00:39:48.800677 1572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:39:48.801288 kubelet[1572]: I0517 00:39:48.801269 1572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:39:48.801422 kubelet[1572]: E0517 00:39:48.801395 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" May 17 00:39:48.801585 kubelet[1572]: I0517 00:39:48.801555 1572 factory.go:221] Registration of the containerd container factory successfully May 17 00:39:48.802150 kubelet[1572]: W0517 00:39:48.802066 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:48.802150 kubelet[1572]: E0517 00:39:48.802128 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:48.810462 kubelet[1572]: I0517 00:39:48.810410 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:39:48.810676 kubelet[1572]: I0517 00:39:48.810642 1572 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:39:48.810676 kubelet[1572]: I0517 00:39:48.810672 1572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:39:48.810759 kubelet[1572]: I0517 00:39:48.810687 1572 state_mem.go:36] "Initialized new in-memory state store" May 17 00:39:48.811567 kubelet[1572]: I0517 00:39:48.811547 1572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:39:48.811681 kubelet[1572]: I0517 00:39:48.811578 1572 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:39:48.811681 kubelet[1572]: I0517 00:39:48.811596 1572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
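
Almost every error in this stretch (the certificate signing request, the reflectors, the event post, the node lease) fails against the same endpoint, https://10.0.0.137:6443, with "connection refused". That is the expected mid-bootstrap chicken-and-egg: the kubelet comes up before the API server it registers with, because the API server is itself one of the static pods this kubelet is about to launch from /etc/kubernetes/manifests ("Adding static pod path" above). A sketch for watching it resolve, assuming crictl and the endpoint taken from the log:

    # The static-pod manifests the kubelet is watching.
    ls /etc/kubernetes/manifests/

    # Once the kube-apiserver container is up, the refused errors stop.
    crictl ps --name kube-apiserver
    curl -ks https://10.0.0.137:6443/healthz; echo
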
May 17 00:39:48.811681 kubelet[1572]: I0517 00:39:48.811604 1572 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:39:48.811681 kubelet[1572]: E0517 00:39:48.811645 1572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:39:48.812352 kubelet[1572]: W0517 00:39:48.812310 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:48.812412 kubelet[1572]: E0517 00:39:48.812358 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:48.900507 kubelet[1572]: E0517 00:39:48.900453 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:39:48.912690 kubelet[1572]: E0517 00:39:48.912653 1572 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:39:48.925273 kubelet[1572]: I0517 00:39:48.925240 1572 policy_none.go:49] "None policy: Start" May 17 00:39:48.925320 kubelet[1572]: I0517 00:39:48.925279 1572 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:39:48.925320 kubelet[1572]: I0517 00:39:48.925293 1572 state_mem.go:35] "Initializing new in-memory state store" May 17 00:39:48.930832 systemd[1]: Created slice kubepods.slice. May 17 00:39:48.934685 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:39:48.937247 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:39:48.943649 kubelet[1572]: I0517 00:39:48.943622 1572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:39:48.943776 kubelet[1572]: I0517 00:39:48.943753 1572 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:39:48.943834 kubelet[1572]: I0517 00:39:48.943782 1572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:39:48.944420 kubelet[1572]: I0517 00:39:48.943990 1572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:39:48.944488 kubelet[1572]: E0517 00:39:48.944465 1572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:39:48.944551 kubelet[1572]: E0517 00:39:48.944510 1572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:39:49.002651 kubelet[1572]: E0517 00:39:49.002556 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" May 17 00:39:49.045827 kubelet[1572]: I0517 00:39:49.045781 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:39:49.046256 kubelet[1572]: E0517 00:39:49.046211 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 17 00:39:49.120712 systemd[1]: Created slice kubepods-burstable-poda8b1185b4e66643ec5a77ccff8da4f91.slice. May 17 00:39:49.139578 kubelet[1572]: E0517 00:39:49.139539 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:39:49.141981 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 17 00:39:49.148572 kubelet[1572]: E0517 00:39:49.148525 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:39:49.150564 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 17 00:39:49.151994 kubelet[1572]: E0517 00:39:49.151950 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 17 00:39:49.202292 kubelet[1572]: I0517 00:39:49.202235 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:39:49.202292 kubelet[1572]: I0517 00:39:49.202276 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b1185b4e66643ec5a77ccff8da4f91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:49.202292 kubelet[1572]: I0517 00:39:49.202298 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:49.202728 kubelet[1572]: I0517 00:39:49.202312 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:49.202728 kubelet[1572]: I0517 
00:39:49.202349 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:49.202728 kubelet[1572]: I0517 00:39:49.202399 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:49.202728 kubelet[1572]: I0517 00:39:49.202426 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:49.202728 kubelet[1572]: I0517 00:39:49.202445 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b1185b4e66643ec5a77ccff8da4f91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:49.202856 kubelet[1572]: I0517 00:39:49.202461 1572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b1185b4e66643ec5a77ccff8da4f91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:49.247685 kubelet[1572]: I0517 00:39:49.247627 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:39:49.248060 kubelet[1572]: E0517 00:39:49.248008 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 17 00:39:49.403575 kubelet[1572]: E0517 00:39:49.403540 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" May 17 00:39:49.440931 kubelet[1572]: E0517 00:39:49.440887 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:49.441616 env[1201]: time="2025-05-17T00:39:49.441568896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8b1185b4e66643ec5a77ccff8da4f91,Namespace:kube-system,Attempt:0,}" May 17 00:39:49.449771 kubelet[1572]: E0517 00:39:49.449744 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:49.450177 env[1201]: time="2025-05-17T00:39:49.450142308Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 17 00:39:49.452402 kubelet[1572]: E0517 00:39:49.452370 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:49.452779 env[1201]: time="2025-05-17T00:39:49.452733936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 17 00:39:49.649784 kubelet[1572]: I0517 00:39:49.649736 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 17 00:39:49.650051 kubelet[1572]: E0517 00:39:49.650021 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" May 17 00:39:49.691596 kubelet[1572]: W0517 00:39:49.691489 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:49.691596 kubelet[1572]: E0517 00:39:49.691540 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:49.961071 kubelet[1572]: W0517 00:39:49.960913 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:49.961071 kubelet[1572]: E0517 00:39:49.960992 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:50.027476 kubelet[1572]: W0517 00:39:50.027399 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:50.027626 kubelet[1572]: E0517 00:39:50.027479 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:50.083255 kubelet[1572]: W0517 00:39:50.083213 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused May 17 00:39:50.083255 kubelet[1572]: E0517 00:39:50.083255 1572 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" May 17 00:39:50.204473 kubelet[1572]: E0517 00:39:50.204421 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" May 17 00:39:50.246542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235379261.mount: Deactivated successfully. May 17 00:39:50.284074 env[1201]: time="2025-05-17T00:39:50.283968613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.294458 env[1201]: time="2025-05-17T00:39:50.294409778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.301718 env[1201]: time="2025-05-17T00:39:50.301690682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.309601 env[1201]: time="2025-05-17T00:39:50.309546184Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.316487 env[1201]: time="2025-05-17T00:39:50.316448519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.325888 env[1201]: time="2025-05-17T00:39:50.325847307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.327460 env[1201]: time="2025-05-17T00:39:50.327428049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.342424 env[1201]: time="2025-05-17T00:39:50.342387947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.344888 env[1201]: time="2025-05-17T00:39:50.344839939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.356375 env[1201]: time="2025-05-17T00:39:50.356311829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:39:50.385986 env[1201]: time="2025-05-17T00:39:50.385213095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
May 17 00:39:50.385986 env[1201]: time="2025-05-17T00:39:50.385213095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:50.390838 env[1201]: time="2025-05-17T00:39:50.390714423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:50.451772 kubelet[1572]: I0517 00:39:50.451714 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 00:39:50.452144 kubelet[1572]: E0517 00:39:50.452093 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
May 17 00:39:50.634657 kubelet[1572]: E0517 00:39:50.634494 1572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18402998205ff8fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:39:48.77468083 +0000 UTC m=+0.620425078,LastTimestamp:2025-05-17 00:39:48.77468083 +0000 UTC m=+0.620425078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 17 00:39:50.917243 kubelet[1572]: E0517 00:39:50.917195 1572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
May 17 00:39:51.145273 env[1201]: time="2025-05-17T00:39:51.145198399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:51.145273 env[1201]: time="2025-05-17T00:39:51.145273270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:51.145636 env[1201]: time="2025-05-17T00:39:51.145304463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:51.145636 env[1201]: time="2025-05-17T00:39:51.145499297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22 pid=1614 runtime=io.containerd.runc.v2
May 17 00:39:51.159837 systemd[1]: Started cri-containerd-bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22.scope.
May 17 00:39:51.192520 env[1201]: time="2025-05-17T00:39:51.192176791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22\""
May 17 00:39:51.193197 kubelet[1572]: E0517 00:39:51.193171 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:51.194833 env[1201]: time="2025-05-17T00:39:51.194811814Z" level=info msg="CreateContainer within sandbox \"bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:39:51.238475 systemd[1]: run-containerd-runc-k8s.io-bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22-runc.ceYoZ3.mount: Deactivated successfully.
May 17 00:39:51.450874 env[1201]: time="2025-05-17T00:39:51.450744451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:51.450874 env[1201]: time="2025-05-17T00:39:51.450776436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:51.450874 env[1201]: time="2025-05-17T00:39:51.450785614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:51.451038 env[1201]: time="2025-05-17T00:39:51.450893032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98 pid=1654 runtime=io.containerd.runc.v2
May 17 00:39:51.466640 systemd[1]: Started cri-containerd-0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98.scope.
May 17 00:39:51.497351 env[1201]: time="2025-05-17T00:39:51.497300167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98\""
May 17 00:39:51.498084 kubelet[1572]: E0517 00:39:51.497958 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:51.499670 env[1201]: time="2025-05-17T00:39:51.499648360Z" level=info msg="CreateContainer within sandbox \"0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:39:51.542131 env[1201]: time="2025-05-17T00:39:51.542055690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:51.542131 env[1201]: time="2025-05-17T00:39:51.542111604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:51.542131 env[1201]: time="2025-05-17T00:39:51.542123417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:51.542521 env[1201]: time="2025-05-17T00:39:51.542436932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60e78964cbf44900a979374a6c482d6efd304e92b62f0dddbd275521cf80f819 pid=1698 runtime=io.containerd.runc.v2
May 17 00:39:51.553150 systemd[1]: Started cri-containerd-60e78964cbf44900a979374a6c482d6efd304e92b62f0dddbd275521cf80f819.scope.
May 17 00:39:51.586010 env[1201]: time="2025-05-17T00:39:51.585955952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8b1185b4e66643ec5a77ccff8da4f91,Namespace:kube-system,Attempt:0,} returns sandbox id \"60e78964cbf44900a979374a6c482d6efd304e92b62f0dddbd275521cf80f819\""
May 17 00:39:51.586613 kubelet[1572]: E0517 00:39:51.586585 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:51.590060 env[1201]: time="2025-05-17T00:39:51.590009687Z" level=info msg="CreateContainer within sandbox \"60e78964cbf44900a979374a6c482d6efd304e92b62f0dddbd275521cf80f819\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:39:51.784865 kubelet[1572]: W0517 00:39:51.784724 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
May 17 00:39:51.784865 kubelet[1572]: E0517 00:39:51.784801 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
May 17 00:39:51.805859 kubelet[1572]: E0517 00:39:51.805811 1572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="3.2s"
May 17 00:39:51.847524 kubelet[1572]: W0517 00:39:51.847461 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
May 17 00:39:51.847524 kubelet[1572]: E0517 00:39:51.847528 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
May 17 00:39:51.871355 env[1201]: time="2025-05-17T00:39:51.871292101Z" level=info msg="CreateContainer within sandbox \"bcdbc7cbbbe215c4ced8b236aa752b40e15b9eacfeb97a4e4edd76fafc1dba22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ba4ddc353c8de236d8cd00df9d57090221922539b10cbe9d072c929f42688684\""
May 17 00:39:51.871967 env[1201]: time="2025-05-17T00:39:51.871946124Z" level=info msg="StartContainer for \"ba4ddc353c8de236d8cd00df9d57090221922539b10cbe9d072c929f42688684\""
May 17 00:39:51.886309 systemd[1]: Started cri-containerd-ba4ddc353c8de236d8cd00df9d57090221922539b10cbe9d072c929f42688684.scope.
May 17 00:39:51.935896 env[1201]: time="2025-05-17T00:39:51.935828279Z" level=info msg="StartContainer for \"ba4ddc353c8de236d8cd00df9d57090221922539b10cbe9d072c929f42688684\" returns successfully"
May 17 00:39:51.986146 kubelet[1572]: W0517 00:39:51.986065 1572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused
May 17 00:39:51.986300 kubelet[1572]: E0517 00:39:51.986154 1572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError"
May 17 00:39:51.988480 env[1201]: time="2025-05-17T00:39:51.988428339Z" level=info msg="CreateContainer within sandbox \"0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a7d33065d7c2542a57f9ae7f7e32bfc3d8e3ff4630f3ee7292b3725cd079afd\""
May 17 00:39:51.989017 env[1201]: time="2025-05-17T00:39:51.988985938Z" level=info msg="StartContainer for \"4a7d33065d7c2542a57f9ae7f7e32bfc3d8e3ff4630f3ee7292b3725cd079afd\""
May 17 00:39:52.000651 env[1201]: time="2025-05-17T00:39:52.000598732Z" level=info msg="CreateContainer within sandbox \"60e78964cbf44900a979374a6c482d6efd304e92b62f0dddbd275521cf80f819\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a25d0ea16deabe2a0e6e07da9d49b5881d817ae19794bf8e78d81f9f9efc8fee\""
May 17 00:39:52.001855 env[1201]: time="2025-05-17T00:39:52.001418108Z" level=info msg="StartContainer for \"a25d0ea16deabe2a0e6e07da9d49b5881d817ae19794bf8e78d81f9f9efc8fee\""
May 17 00:39:52.002838 systemd[1]: Started cri-containerd-4a7d33065d7c2542a57f9ae7f7e32bfc3d8e3ff4630f3ee7292b3725cd079afd.scope.
May 17 00:39:52.018900 systemd[1]: Started cri-containerd-a25d0ea16deabe2a0e6e07da9d49b5881d817ae19794bf8e78d81f9f9efc8fee.scope.
May 17 00:39:52.053534 kubelet[1572]: I0517 00:39:52.053416 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 00:39:52.054215 kubelet[1572]: E0517 00:39:52.053681 1572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost"
May 17 00:39:52.064172 env[1201]: time="2025-05-17T00:39:52.064125856Z" level=info msg="StartContainer for \"4a7d33065d7c2542a57f9ae7f7e32bfc3d8e3ff4630f3ee7292b3725cd079afd\" returns successfully"
May 17 00:39:52.071667 env[1201]: time="2025-05-17T00:39:52.071636571Z" level=info msg="StartContainer for \"a25d0ea16deabe2a0e6e07da9d49b5881d817ae19794bf8e78d81f9f9efc8fee\" returns successfully"
May 17 00:39:52.246275 systemd[1]: run-containerd-runc-k8s.io-0e803d00d4897f2574af54511fd116e8555371ea87ad03c43b1e8c73e1e61a98-runc.adwU5W.mount: Deactivated successfully.
May 17 00:39:52.823333 kubelet[1572]: E0517 00:39:52.823285 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:52.823654 kubelet[1572]: E0517 00:39:52.823456 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:52.825055 kubelet[1572]: E0517 00:39:52.825036 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:52.825229 kubelet[1572]: E0517 00:39:52.825214 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:52.826549 kubelet[1572]: E0517 00:39:52.826521 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:52.826645 kubelet[1572]: E0517 00:39:52.826622 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:53.828357 kubelet[1572]: E0517 00:39:53.828326 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:53.828719 kubelet[1572]: E0517 00:39:53.828441 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:53.828719 kubelet[1572]: E0517 00:39:53.828465 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:53.828719 kubelet[1572]: E0517 00:39:53.828629 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:53.829113 kubelet[1572]: E0517 00:39:53.829075 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:53.829205 kubelet[1572]: E0517 00:39:53.829177 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:53.868078 kubelet[1572]: E0517 00:39:53.868034 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 17 00:39:54.083112 update_engine[1192]: I0517 00:39:54.082934 1192 update_attempter.cc:509] Updating boot flags...
May 17 00:39:54.220929 kubelet[1572]: E0517 00:39:54.220874 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 17 00:39:54.667754 kubelet[1572]: E0517 00:39:54.667706 1572 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 17 00:39:54.829113 kubelet[1572]: E0517 00:39:54.829069 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:54.829468 kubelet[1572]: E0517 00:39:54.829181 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:54.829468 kubelet[1572]: E0517 00:39:54.829320 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:54.829468 kubelet[1572]: E0517 00:39:54.829446 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:55.009286 kubelet[1572]: E0517 00:39:55.009155 1572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 17 00:39:55.255120 kubelet[1572]: I0517 00:39:55.255053 1572 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 00:39:55.260126 kubelet[1572]: I0517 00:39:55.260043 1572 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 17 00:39:55.260126 kubelet[1572]: E0517 00:39:55.260069 1572 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 17 00:39:55.276428 kubelet[1572]: E0517 00:39:55.276388 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.376659 kubelet[1572]: E0517 00:39:55.376606 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.477190 kubelet[1572]: E0517 00:39:55.477156 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.577897 kubelet[1572]: E0517 00:39:55.577746 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.678587 kubelet[1572]: E0517 00:39:55.678538 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.779704 kubelet[1572]: E0517 00:39:55.779649 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.830003 kubelet[1572]: E0517 00:39:55.829896 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:55.830003 kubelet[1572]: E0517 00:39:55.829997 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:55.880276 kubelet[1572]: E0517 00:39:55.880234 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:55.910855 kubelet[1572]: E0517 00:39:55.910820 1572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 17 00:39:55.910976 kubelet[1572]: E0517 00:39:55.910948 1572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:39:55.981327 kubelet[1572]: E0517 00:39:55.981294 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:56.082122 kubelet[1572]: E0517 00:39:56.082009 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:56.183653 kubelet[1572]: E0517 00:39:56.183615 1572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:56.200696 kubelet[1572]: I0517 00:39:56.200656 1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 17 00:39:56.349204 kubelet[1572]: I0517 00:39:56.349084 1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 17 00:39:56.353867 kubelet[1572]: I0517 00:39:56.353819 1572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 17 00:39:56.393600 systemd[1]: Reloading.
May 17 00:39:56.454930 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2025-05-17T00:39:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:39:56.454960 /usr/lib/systemd/system-generators/torcx-generator[1884]: time="2025-05-17T00:39:56Z" level=info msg="torcx already run"
May 17 00:39:56.512920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:39:56.512937 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:39:56.530817 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:39:56.619853 systemd[1]: Stopping kubelet.service...
May 17 00:39:56.641486 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:39:56.641637 systemd[1]: Stopped kubelet.service.
May 17 00:39:56.641683 systemd[1]: kubelet.service: Consumed 1.036s CPU time.
May 17 00:39:56.643037 systemd[1]: Starting kubelet.service...
May 17 00:39:56.724878 systemd[1]: Started kubelet.service.
May 17 00:39:56.762139 kubelet[1930]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:39:56.762139 kubelet[1930]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:39:56.762139 kubelet[1930]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:39:56.762846 kubelet[1930]: I0517 00:39:56.762221 1930 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:39:56.771945 kubelet[1930]: I0517 00:39:56.771901 1930 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:39:56.771945 kubelet[1930]: I0517 00:39:56.771929 1930 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:39:56.772207 kubelet[1930]: I0517 00:39:56.772187 1930 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:39:56.773683 kubelet[1930]: I0517 00:39:56.773315 1930 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 00:39:56.778565 kubelet[1930]: I0517 00:39:56.777407 1930 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:39:56.782850 kubelet[1930]: E0517 00:39:56.782818 1930 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:39:56.782850 kubelet[1930]: I0517 00:39:56.782849 1930 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:39:56.786016 kubelet[1930]: I0517 00:39:56.785978 1930 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:39:56.786190 kubelet[1930]: I0517 00:39:56.786155 1930 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:39:56.786352 kubelet[1930]: I0517 00:39:56.786182 1930 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:39:56.786352 kubelet[1930]: I0517 00:39:56.786350 1930 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:39:56.786525 kubelet[1930]: I0517 00:39:56.786360 1930 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:39:56.786525 kubelet[1930]: I0517 00:39:56.786403 1930 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:39:56.786525 kubelet[1930]: I0517 00:39:56.786516 1930 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:39:56.786620 kubelet[1930]: I0517 00:39:56.786535 1930 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:39:56.786620 kubelet[1930]: I0517 00:39:56.786550 1930 kubelet.go:352] "Adding apiserver pod source"
May 17 00:39:56.786620 kubelet[1930]: I0517 00:39:56.786558 1930 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:39:56.787522 kubelet[1930]: I0517 00:39:56.787498 1930 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:39:56.788497 kubelet[1930]: I0517 00:39:56.788453 1930 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:39:56.789166 kubelet[1930]: I0517 00:39:56.789151 1930 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:39:56.789274 kubelet[1930]: I0517 00:39:56.789259 1930 server.go:1287] "Started kubelet"
May 17 00:39:56.791127 kubelet[1930]: I0517 00:39:56.790962 1930 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:39:56.791378 kubelet[1930]: I0517 00:39:56.791205 1930 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:39:56.791378 kubelet[1930]: I0517 00:39:56.791253 1930 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:39:56.791942 sudo[1946]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 17 00:39:56.792268 kubelet[1930]: I0517 00:39:56.792146 1930 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:39:56.792451 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 17 00:39:56.792546 kubelet[1930]: I0517 00:39:56.792456 1930 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:39:56.793278 kubelet[1930]: I0517 00:39:56.793262 1930 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:39:56.807075 kubelet[1930]: I0517 00:39:56.807037 1930 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:39:56.807404 kubelet[1930]: E0517 00:39:56.807379 1930 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:39:56.807464 kubelet[1930]: I0517 00:39:56.807435 1930 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:39:56.807580 kubelet[1930]: I0517 00:39:56.807561 1930 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:39:56.808413 kubelet[1930]: I0517 00:39:56.808390 1930 factory.go:221] Registration of the systemd container factory successfully
May 17 00:39:56.808534 kubelet[1930]: I0517 00:39:56.808507 1930 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:39:56.813765 kubelet[1930]: I0517 00:39:56.813741 1930 factory.go:221] Registration of the containerd container factory successfully
May 17 00:39:56.814172 kubelet[1930]: I0517 00:39:56.814142 1930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:39:56.814637 kubelet[1930]: E0517 00:39:56.814610 1930 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:39:56.817993 kubelet[1930]: I0517 00:39:56.817948 1930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:39:56.817993 kubelet[1930]: I0517 00:39:56.817995 1930 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:39:56.818233 kubelet[1930]: I0517 00:39:56.818016 1930 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:39:56.818233 kubelet[1930]: I0517 00:39:56.818024 1930 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:39:56.818233 kubelet[1930]: E0517 00:39:56.818071 1930 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:39:56.843201 kubelet[1930]: I0517 00:39:56.843171 1930 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:39:56.843201 kubelet[1930]: I0517 00:39:56.843193 1930 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:39:56.843379 kubelet[1930]: I0517 00:39:56.843239 1930 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:39:56.843430 kubelet[1930]: I0517 00:39:56.843400 1930 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:39:56.843430 kubelet[1930]: I0517 00:39:56.843413 1930 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:39:56.843511 kubelet[1930]: I0517 00:39:56.843435 1930 policy_none.go:49] "None policy: Start"
May 17 00:39:56.843511 kubelet[1930]: I0517 00:39:56.843446 1930 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:39:56.843511 kubelet[1930]: I0517 00:39:56.843457 1930 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:39:56.843598 kubelet[1930]: I0517 00:39:56.843572 1930 state_mem.go:75] "Updated machine memory state"
May 17 00:39:56.847411 kubelet[1930]: I0517 00:39:56.847385 1930 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:39:56.847539 kubelet[1930]: I0517 00:39:56.847524 1930 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:39:56.847582 kubelet[1930]: I0517 00:39:56.847539 1930 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:39:56.847990 kubelet[1930]: I0517 00:39:56.847972 1930 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:39:56.850765 kubelet[1930]: E0517 00:39:56.850359 1930 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:39:56.919006 kubelet[1930]: I0517 00:39:56.918955 1930 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 17 00:39:56.921221 kubelet[1930]: I0517 00:39:56.919343 1930 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 17 00:39:56.921221 kubelet[1930]: I0517 00:39:56.919590 1930 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 17 00:39:56.924869 kubelet[1930]: E0517 00:39:56.924813 1930 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 17 00:39:56.925136 kubelet[1930]: E0517 00:39:56.925115 1930 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 17 00:39:56.925297 kubelet[1930]: E0517 00:39:56.925256 1930 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 17 00:39:56.951202 kubelet[1930]: I0517 00:39:56.951178 1930 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
May 17 00:39:56.956082 kubelet[1930]: I0517 00:39:56.956057 1930 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
May 17 00:39:56.956280 kubelet[1930]: I0517 00:39:56.956263 1930 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
May 17 00:39:57.009196 kubelet[1930]: I0517 00:39:57.009162 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b1185b4e66643ec5a77ccff8da4f91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost"
May 17 00:39:57.009196 kubelet[1930]: I0517 00:39:57.009194 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:39:57.009196 kubelet[1930]: I0517 00:39:57.009213 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
May 17 00:39:57.009400 kubelet[1930]: I0517 00:39:57.009231 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost"
\"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:57.009400 kubelet[1930]: I0517 00:39:57.009263 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b1185b4e66643ec5a77ccff8da4f91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8b1185b4e66643ec5a77ccff8da4f91\") " pod="kube-system/kube-apiserver-localhost" May 17 00:39:57.009400 kubelet[1930]: I0517 00:39:57.009276 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:57.009400 kubelet[1930]: I0517 00:39:57.009290 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:39:57.009517 kubelet[1930]: I0517 00:39:57.009305 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 17 00:39:57.225933 kubelet[1930]: E0517 00:39:57.225828 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.225933 kubelet[1930]: E0517 00:39:57.225828 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.225933 kubelet[1930]: E0517 00:39:57.225927 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.272298 sudo[1946]: pam_unix(sudo:session): session closed for user root May 17 00:39:57.787109 kubelet[1930]: I0517 00:39:57.787047 1930 apiserver.go:52] "Watching apiserver" May 17 00:39:57.808493 kubelet[1930]: I0517 00:39:57.808454 1930 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:39:57.829545 kubelet[1930]: I0517 00:39:57.829508 1930 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 17 00:39:57.829685 kubelet[1930]: I0517 00:39:57.829598 1930 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 17 00:39:57.829799 kubelet[1930]: E0517 00:39:57.829777 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.837856 kubelet[1930]: E0517 00:39:57.837801 1930 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 17 00:39:57.837856 
kubelet[1930]: E0517 00:39:57.837818 1930 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:39:57.838020 kubelet[1930]: E0517 00:39:57.837984 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.838072 kubelet[1930]: E0517 00:39:57.837986 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:57.859215 kubelet[1930]: I0517 00:39:57.859147 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.859126214 podStartE2EDuration="1.859126214s" podCreationTimestamp="2025-05-17 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:57.848671024 +0000 UTC m=+1.120493618" watchObservedRunningTime="2025-05-17 00:39:57.859126214 +0000 UTC m=+1.130948808" May 17 00:39:57.866178 kubelet[1930]: I0517 00:39:57.866126 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.866092465 podStartE2EDuration="1.866092465s" podCreationTimestamp="2025-05-17 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:57.859464776 +0000 UTC m=+1.131287380" watchObservedRunningTime="2025-05-17 00:39:57.866092465 +0000 UTC m=+1.137915059" May 17 00:39:57.874071 kubelet[1930]: I0517 00:39:57.874013 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8739926709999999 podStartE2EDuration="1.873992671s" podCreationTimestamp="2025-05-17 00:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:39:57.86638667 +0000 UTC m=+1.138209264" watchObservedRunningTime="2025-05-17 00:39:57.873992671 +0000 UTC m=+1.145815265" May 17 00:39:58.740739 sudo[1298]: pam_unix(sudo:session): session closed for user root May 17 00:39:58.741868 sshd[1294]: pam_unix(sshd:session): session closed for user core May 17 00:39:58.743768 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:58908.service: Deactivated successfully. May 17 00:39:58.744477 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:39:58.744615 systemd[1]: session-5.scope: Consumed 3.945s CPU time. May 17 00:39:58.744970 systemd-logind[1189]: Session 5 logged out. Waiting for processes to exit. May 17 00:39:58.745529 systemd-logind[1189]: Removed session 5. 
May 17 00:39:58.832768 kubelet[1930]: E0517 00:39:58.831073 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:58.832768 kubelet[1930]: E0517 00:39:58.831878 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:59.831776 kubelet[1930]: E0517 00:39:59.831751 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:39:59.831976 kubelet[1930]: E0517 00:39:59.831863 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:00.740744 kubelet[1930]: I0517 00:40:00.740703 1930 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:40:00.741190 env[1201]: time="2025-05-17T00:40:00.741142385Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:40:00.741380 kubelet[1930]: I0517 00:40:00.741358 1930 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:40:01.569957 systemd[1]: Created slice kubepods-besteffort-pod30802f39_4c55_4aca_b4c0_e2ff498e740d.slice. May 17 00:40:01.639981 kubelet[1930]: I0517 00:40:01.639956 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30802f39-4c55-4aca-b4c0-e2ff498e740d-kube-proxy\") pod \"kube-proxy-vltr6\" (UID: \"30802f39-4c55-4aca-b4c0-e2ff498e740d\") " pod="kube-system/kube-proxy-vltr6" May 17 00:40:01.640067 kubelet[1930]: I0517 00:40:01.639987 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30802f39-4c55-4aca-b4c0-e2ff498e740d-xtables-lock\") pod \"kube-proxy-vltr6\" (UID: \"30802f39-4c55-4aca-b4c0-e2ff498e740d\") " pod="kube-system/kube-proxy-vltr6" May 17 00:40:01.640067 kubelet[1930]: I0517 00:40:01.640006 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30802f39-4c55-4aca-b4c0-e2ff498e740d-lib-modules\") pod \"kube-proxy-vltr6\" (UID: \"30802f39-4c55-4aca-b4c0-e2ff498e740d\") " pod="kube-system/kube-proxy-vltr6" May 17 00:40:01.640067 kubelet[1930]: I0517 00:40:01.640026 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kkl\" (UniqueName: \"kubernetes.io/projected/30802f39-4c55-4aca-b4c0-e2ff498e740d-kube-api-access-q2kkl\") pod \"kube-proxy-vltr6\" (UID: \"30802f39-4c55-4aca-b4c0-e2ff498e740d\") " pod="kube-system/kube-proxy-vltr6" May 17 00:40:01.841989 systemd[1]: Created slice kubepods-burstable-poda71154ac_d7bc_4377_905d_b04e4476e2c6.slice. 
May 17 00:40:01.943766 kubelet[1930]: I0517 00:40:01.943702 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-lib-modules\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.943766 kubelet[1930]: I0517 00:40:01.943743 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77bm\" (UniqueName: \"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.943766 kubelet[1930]: I0517 00:40:01.943760 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-etc-cni-netd\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943781 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-bpf-maps\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943795 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-hubble-tls\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943811 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-hostproc\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943845 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-cgroup\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943870 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-net\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944211 kubelet[1930]: I0517 00:40:01.943893 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-config-path\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944432 kubelet[1930]: I0517 00:40:01.943906 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-run\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944432 kubelet[1930]: I0517 00:40:01.943922 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cni-path\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944432 kubelet[1930]: I0517 00:40:01.943934 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-xtables-lock\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944432 kubelet[1930]: I0517 00:40:01.943946 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a71154ac-d7bc-4377-905d-b04e4476e2c6-clustermesh-secrets\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:01.944432 kubelet[1930]: I0517 00:40:01.943961 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-kernel\") pod \"cilium-wl7nt\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " pod="kube-system/cilium-wl7nt"
May 17 00:40:02.044484 kubelet[1930]: I0517 00:40:02.044457 1930 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:40:02.140495 kubelet[1930]: E0517 00:40:02.140467 1930 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 17 00:40:02.140688 kubelet[1930]: E0517 00:40:02.140674 1930 projected.go:194] Error preparing data for projected volume kube-api-access-n77bm for pod kube-system/cilium-wl7nt: configmap "kube-root-ca.crt" not found
May 17 00:40:02.140841 kubelet[1930]: E0517 00:40:02.140827 1930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm podName:a71154ac-d7bc-4377-905d-b04e4476e2c6 nodeName:}" failed. No retries permitted until 2025-05-17 00:40:02.640784361 +0000 UTC m=+5.912607015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n77bm" (UniqueName: "kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm") pod "cilium-wl7nt" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6") : configmap "kube-root-ca.crt" not found
May 17 00:40:02.141277 kubelet[1930]: E0517 00:40:02.141242 1930 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 17 00:40:02.141277 kubelet[1930]: E0517 00:40:02.141280 1930 projected.go:194] Error preparing data for projected volume kube-api-access-q2kkl for pod kube-system/kube-proxy-vltr6: configmap "kube-root-ca.crt" not found
May 17 00:40:02.141354 kubelet[1930]: E0517 00:40:02.141322 1930 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30802f39-4c55-4aca-b4c0-e2ff498e740d-kube-api-access-q2kkl podName:30802f39-4c55-4aca-b4c0-e2ff498e740d nodeName:}" failed. No retries permitted until 2025-05-17 00:40:02.641308159 +0000 UTC m=+5.913130753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q2kkl" (UniqueName: "kubernetes.io/projected/30802f39-4c55-4aca-b4c0-e2ff498e740d-kube-api-access-q2kkl") pod "kube-proxy-vltr6" (UID: "30802f39-4c55-4aca-b4c0-e2ff498e740d") : configmap "kube-root-ca.crt" not found
May 17 00:40:02.207568 systemd[1]: Created slice kubepods-besteffort-pod0d315840_9b2e_4732_81a9_8d50fbd1700e.slice.
May 17 00:40:02.245693 kubelet[1930]: I0517 00:40:02.245620 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp98\" (UniqueName: \"kubernetes.io/projected/0d315840-9b2e-4732-81a9-8d50fbd1700e-kube-api-access-2mp98\") pod \"cilium-operator-6c4d7847fc-dksm8\" (UID: \"0d315840-9b2e-4732-81a9-8d50fbd1700e\") " pod="kube-system/cilium-operator-6c4d7847fc-dksm8"
May 17 00:40:02.245693 kubelet[1930]: I0517 00:40:02.245693 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d315840-9b2e-4732-81a9-8d50fbd1700e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dksm8\" (UID: \"0d315840-9b2e-4732-81a9-8d50fbd1700e\") " pod="kube-system/cilium-operator-6c4d7847fc-dksm8"
May 17 00:40:02.510895 kubelet[1930]: E0517 00:40:02.510671 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:40:02.511304 env[1201]: time="2025-05-17T00:40:02.511246849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dksm8,Uid:0d315840-9b2e-4732-81a9-8d50fbd1700e,Namespace:kube-system,Attempt:0,}"
May 17 00:40:02.553908 kubelet[1930]: E0517 00:40:02.553872 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:40:02.739207 env[1201]: time="2025-05-17T00:40:02.739139960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:40:02.739207 env[1201]: time="2025-05-17T00:40:02.739180960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:02.739207 env[1201]: time="2025-05-17T00:40:02.739191150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:02.739398 env[1201]: time="2025-05-17T00:40:02.739321747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11 pid=2024 runtime=io.containerd.runc.v2 May 17 00:40:02.745195 kubelet[1930]: E0517 00:40:02.745167 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:02.746833 env[1201]: time="2025-05-17T00:40:02.746793394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wl7nt,Uid:a71154ac-d7bc-4377-905d-b04e4476e2c6,Namespace:kube-system,Attempt:0,}" May 17 00:40:02.751744 systemd[1]: Started cri-containerd-972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11.scope. May 17 00:40:02.782701 kubelet[1930]: E0517 00:40:02.782598 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:02.784846 env[1201]: time="2025-05-17T00:40:02.783091571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vltr6,Uid:30802f39-4c55-4aca-b4c0-e2ff498e740d,Namespace:kube-system,Attempt:0,}" May 17 00:40:02.788494 env[1201]: time="2025-05-17T00:40:02.788469609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dksm8,Uid:0d315840-9b2e-4732-81a9-8d50fbd1700e,Namespace:kube-system,Attempt:0,} returns sandbox id \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\"" May 17 00:40:02.789227 kubelet[1930]: E0517 00:40:02.789026 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:02.789904 env[1201]: time="2025-05-17T00:40:02.789886350Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:40:02.837523 kubelet[1930]: E0517 00:40:02.837489 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:03.391015 env[1201]: time="2025-05-17T00:40:03.390935046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:03.391015 env[1201]: time="2025-05-17T00:40:03.390989122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:03.391015 env[1201]: time="2025-05-17T00:40:03.391003370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:03.391320 env[1201]: time="2025-05-17T00:40:03.391157392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e pid=2065 runtime=io.containerd.runc.v2 May 17 00:40:03.406480 env[1201]: time="2025-05-17T00:40:03.406211920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:03.406480 env[1201]: time="2025-05-17T00:40:03.406266587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:03.406480 env[1201]: time="2025-05-17T00:40:03.406280965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:03.406882 env[1201]: time="2025-05-17T00:40:03.406806475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18844fa355de8fe692c33b3685d03cf66a6502649adb153e0bf9d4f8bf940791 pid=2089 runtime=io.containerd.runc.v2 May 17 00:40:03.412067 systemd[1]: Started cri-containerd-2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e.scope. May 17 00:40:03.426993 systemd[1]: Started cri-containerd-18844fa355de8fe692c33b3685d03cf66a6502649adb153e0bf9d4f8bf940791.scope. May 17 00:40:03.439618 env[1201]: time="2025-05-17T00:40:03.438587588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wl7nt,Uid:a71154ac-d7bc-4377-905d-b04e4476e2c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\"" May 17 00:40:03.439815 kubelet[1930]: E0517 00:40:03.439311 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:03.454706 env[1201]: time="2025-05-17T00:40:03.454620103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vltr6,Uid:30802f39-4c55-4aca-b4c0-e2ff498e740d,Namespace:kube-system,Attempt:0,} returns sandbox id \"18844fa355de8fe692c33b3685d03cf66a6502649adb153e0bf9d4f8bf940791\"" May 17 00:40:03.455653 kubelet[1930]: E0517 00:40:03.455614 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:03.458042 env[1201]: time="2025-05-17T00:40:03.457956921Z" level=info msg="CreateContainer within sandbox \"18844fa355de8fe692c33b3685d03cf66a6502649adb153e0bf9d4f8bf940791\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:40:03.613527 env[1201]: time="2025-05-17T00:40:03.613463596Z" level=info msg="CreateContainer within sandbox \"18844fa355de8fe692c33b3685d03cf66a6502649adb153e0bf9d4f8bf940791\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a2186c601fe0cd204fd75ff62286a854d61bee5372c28e548411b07777d5d20\"" May 17 00:40:03.614411 env[1201]: time="2025-05-17T00:40:03.614150101Z" level=info msg="StartContainer for \"0a2186c601fe0cd204fd75ff62286a854d61bee5372c28e548411b07777d5d20\"" May 17 00:40:03.632574 systemd[1]: Started cri-containerd-0a2186c601fe0cd204fd75ff62286a854d61bee5372c28e548411b07777d5d20.scope. 
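Annotation: the MountVolume.SetUp failures a few records back (the kube-api-access projected volumes cannot be built until the kube-root-ca.crt ConfigMap is published) are retried on a growing delay; the log shows only the first durationBeforeRetry of 500ms. A sketch of that shape of backoff, assuming a doubling factor and a two-minute cap that the log does not state:

    from datetime import datetime, timedelta

    def backoff_schedule(start, initial=timedelta(milliseconds=500),
                         factor=2.0, cap=timedelta(minutes=2), attempts=6):
        """Yield (earliest-retry-time, delay) pairs for successive failures."""
        when, delay = start, initial
        for _ in range(attempts):
            when = when + delay
            yield when, delay
            delay = min(delay * factor, cap)

    # First failure stamped as in the log; later delays are assumed, not logged.
    start = datetime.fromisoformat("2025-05-17 00:40:02.140827")
    for when, delay in backoff_schedule(start):
        print(f"no retries permitted until {when} (durationBeforeRetry {delay})")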
May 17 00:40:03.663317 env[1201]: time="2025-05-17T00:40:03.663169591Z" level=info msg="StartContainer for \"0a2186c601fe0cd204fd75ff62286a854d61bee5372c28e548411b07777d5d20\" returns successfully" May 17 00:40:03.841068 kubelet[1930]: E0517 00:40:03.841035 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:04.050687 systemd[1]: run-containerd-runc-k8s.io-2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e-runc.GmsoFd.mount: Deactivated successfully. May 17 00:40:05.024658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896007703.mount: Deactivated successfully. May 17 00:40:06.015282 env[1201]: time="2025-05-17T00:40:06.015204877Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:06.017488 env[1201]: time="2025-05-17T00:40:06.017428262Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:06.019292 env[1201]: time="2025-05-17T00:40:06.019255374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:06.019966 env[1201]: time="2025-05-17T00:40:06.019915511Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:40:06.021196 env[1201]: time="2025-05-17T00:40:06.021161130Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:40:06.022761 env[1201]: time="2025-05-17T00:40:06.022698487Z" level=info msg="CreateContainer within sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:40:06.036263 env[1201]: time="2025-05-17T00:40:06.036214911Z" level=info msg="CreateContainer within sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\"" May 17 00:40:06.036869 env[1201]: time="2025-05-17T00:40:06.036816724Z" level=info msg="StartContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\"" May 17 00:40:06.056983 systemd[1]: Started cri-containerd-e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac.scope. 
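Annotation: the PullImage round trip above starts from a tag-plus-digest reference (repo:tag@sha256:...) and returns the content-addressed image ID that was actually pulled. A small sketch of how such a reference splits apart; it assumes no port in the registry host, which this log never uses:

    def parse_image_ref(ref):
        """Split repo:tag@sha256:digest into parts (no registry-port handling)."""
        repo_tag, _, digest = ref.partition("@")
        repo, _, tag = repo_tag.rpartition(":")
        return {"repository": repo, "tag": tag, "digest": digest}

    ref = ("quay.io/cilium/operator-generic:v1.12.5@sha256:"
           "b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
    print(parse_image_ref(ref))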
May 17 00:40:06.083827 env[1201]: time="2025-05-17T00:40:06.083745542Z" level=info msg="StartContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" returns successfully" May 17 00:40:06.849324 kubelet[1930]: E0517 00:40:06.849287 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:06.924346 kubelet[1930]: I0517 00:40:06.924023 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vltr6" podStartSLOduration=5.923996465 podStartE2EDuration="5.923996465s" podCreationTimestamp="2025-05-17 00:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:03.852462928 +0000 UTC m=+7.124285522" watchObservedRunningTime="2025-05-17 00:40:06.923996465 +0000 UTC m=+10.195819059" May 17 00:40:07.779836 kubelet[1930]: E0517 00:40:07.779765 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:07.805943 kubelet[1930]: I0517 00:40:07.805880 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dksm8" podStartSLOduration=2.574526052 podStartE2EDuration="5.805857876s" podCreationTimestamp="2025-05-17 00:40:02 +0000 UTC" firstStartedPulling="2025-05-17 00:40:02.789595049 +0000 UTC m=+6.061417643" lastFinishedPulling="2025-05-17 00:40:06.020926873 +0000 UTC m=+9.292749467" observedRunningTime="2025-05-17 00:40:06.92431685 +0000 UTC m=+10.196139444" watchObservedRunningTime="2025-05-17 00:40:07.805857876 +0000 UTC m=+11.077680470" May 17 00:40:07.851304 kubelet[1930]: E0517 00:40:07.851249 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:07.852463 kubelet[1930]: E0517 00:40:07.851896 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:08.187802 kubelet[1930]: E0517 00:40:08.187760 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:12.772360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840711877.mount: Deactivated successfully. 
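Annotation: the two pod_startup_latency_tracker records above are internally consistent. podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration further subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); for cilium-operator that is 5.805857876s - 3.231331824s = 2.574526052s, exactly as logged. A short check, with timestamps trimmed to the microseconds strptime accepts:

    from datetime import datetime

    def parse(ts):
        return datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")

    created = parse("2025-05-17 00:40:02.000000")    # podCreationTimestamp
    pulling = parse("2025-05-17 00:40:02.789595049") # firstStartedPulling
    pulled  = parse("2025-05-17 00:40:06.020926873") # lastFinishedPulling
    running = parse("2025-05-17 00:40:07.805857876") # observedRunningTime

    e2e = running - created         # podStartE2EDuration, 5.805857s at us precision
    slo = e2e - (pulled - pulling)  # podStartSLOduration, 2.574526s
    print(e2e.total_seconds(), slo.total_seconds())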
May 17 00:40:19.148497 env[1201]: time="2025-05-17T00:40:19.148398388Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:19.181530 env[1201]: time="2025-05-17T00:40:19.181478732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:19.198486 env[1201]: time="2025-05-17T00:40:19.198413303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:19.199320 env[1201]: time="2025-05-17T00:40:19.199278967Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:40:19.201646 env[1201]: time="2025-05-17T00:40:19.201610233Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:40:19.282917 env[1201]: time="2025-05-17T00:40:19.282833067Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\"" May 17 00:40:19.283592 env[1201]: time="2025-05-17T00:40:19.283527554Z" level=info msg="StartContainer for \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\"" May 17 00:40:19.310387 systemd[1]: Started cri-containerd-1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5.scope. May 17 00:40:19.312454 systemd[1]: run-containerd-runc-k8s.io-1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5-runc.HKQ15Q.mount: Deactivated successfully. May 17 00:40:19.367811 env[1201]: time="2025-05-17T00:40:19.367729228Z" level=info msg="StartContainer for \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\" returns successfully" May 17 00:40:19.378436 systemd[1]: cri-containerd-1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5.scope: Deactivated successfully. May 17 00:40:19.523053 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:39914.service. May 17 00:40:19.775527 sshd[2394]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:19.776947 env[1201]: time="2025-05-17T00:40:19.776892217Z" level=info msg="shim disconnected" id=1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5 May 17 00:40:19.776942 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:19.777233 env[1201]: time="2025-05-17T00:40:19.776951431Z" level=warning msg="cleaning up after shim disconnected" id=1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5 namespace=k8s.io May 17 00:40:19.777233 env[1201]: time="2025-05-17T00:40:19.776964517Z" level=info msg="cleaning up dead shim" May 17 00:40:19.782374 systemd[1]: Started session-6.scope. 
May 17 00:40:19.783026 systemd-logind[1189]: New session 6 of user core. May 17 00:40:19.785773 env[1201]: time="2025-05-17T00:40:19.785724890Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2396 runtime=io.containerd.runc.v2\n" May 17 00:40:19.872031 kubelet[1930]: E0517 00:40:19.871787 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:19.881438 env[1201]: time="2025-05-17T00:40:19.881384050Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:40:19.923300 sshd[2394]: pam_unix(sshd:session): session closed for user core May 17 00:40:19.925829 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:39914.service: Deactivated successfully. May 17 00:40:19.926536 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:40:19.927334 systemd-logind[1189]: Session 6 logged out. Waiting for processes to exit. May 17 00:40:19.928000 systemd-logind[1189]: Removed session 6. May 17 00:40:20.036487 env[1201]: time="2025-05-17T00:40:20.036344063Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\"" May 17 00:40:20.036948 env[1201]: time="2025-05-17T00:40:20.036919880Z" level=info msg="StartContainer for \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\"" May 17 00:40:20.056691 systemd[1]: Started cri-containerd-91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be.scope. May 17 00:40:20.098188 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:40:20.098422 systemd[1]: Stopped systemd-sysctl.service. May 17 00:40:20.098629 systemd[1]: Stopping systemd-sysctl.service... May 17 00:40:20.100238 systemd[1]: Starting systemd-sysctl.service... May 17 00:40:20.102034 systemd[1]: cri-containerd-91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be.scope: Deactivated successfully. May 17 00:40:20.115821 env[1201]: time="2025-05-17T00:40:20.115762308Z" level=info msg="StartContainer for \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\" returns successfully" May 17 00:40:20.121704 systemd[1]: Finished systemd-sysctl.service. May 17 00:40:20.209473 env[1201]: time="2025-05-17T00:40:20.209416875Z" level=info msg="shim disconnected" id=91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be May 17 00:40:20.209473 env[1201]: time="2025-05-17T00:40:20.209475478Z" level=warning msg="cleaning up after shim disconnected" id=91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be namespace=k8s.io May 17 00:40:20.209878 env[1201]: time="2025-05-17T00:40:20.209487492Z" level=info msg="cleaning up dead shim" May 17 00:40:20.215958 env[1201]: time="2025-05-17T00:40:20.215896984Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2471 runtime=io.containerd.runc.v2\n" May 17 00:40:20.241292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5-rootfs.mount: Deactivated successfully. 
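Annotation: the systemd-sysctl stop/start above lands exactly when cilium's apply-sysctl-overwrites init container runs, which looks like the host reacting to the CNI dropping a sysctl override; the log itself does not say which knobs changed. As a hedged illustration only, agents typically write such settings straight into /proc/sys; the two keys below are common cilium-adjacent examples, not values read from this node:

    SYSCTLS = {
        "net/ipv4/conf/all/rp_filter": "0",  # example knob, assumed
        "net/core/bpf_jit_enable": "1",      # example knob, assumed
    }

    def apply_sysctls(settings, root="/proc/sys"):
        """Write each key's value under /proc/sys; requires root privileges."""
        for key, value in settings.items():
            with open(f"{root}/{key}", "w") as handle:
                handle.write(value)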
May 17 00:40:20.874063 kubelet[1930]: E0517 00:40:20.874033 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:20.875661 env[1201]: time="2025-05-17T00:40:20.875620766Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:40:21.297885 env[1201]: time="2025-05-17T00:40:21.297817681Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\"" May 17 00:40:21.298400 env[1201]: time="2025-05-17T00:40:21.298352888Z" level=info msg="StartContainer for \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\"" May 17 00:40:21.316490 systemd[1]: Started cri-containerd-0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898.scope. May 17 00:40:21.338792 systemd[1]: cri-containerd-0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898.scope: Deactivated successfully. May 17 00:40:21.357449 env[1201]: time="2025-05-17T00:40:21.357376864Z" level=info msg="StartContainer for \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\" returns successfully" May 17 00:40:21.371012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898-rootfs.mount: Deactivated successfully. May 17 00:40:21.420645 env[1201]: time="2025-05-17T00:40:21.420591833Z" level=info msg="shim disconnected" id=0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898 May 17 00:40:21.420645 env[1201]: time="2025-05-17T00:40:21.420642200Z" level=warning msg="cleaning up after shim disconnected" id=0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898 namespace=k8s.io May 17 00:40:21.420645 env[1201]: time="2025-05-17T00:40:21.420651719Z" level=info msg="cleaning up dead shim" May 17 00:40:21.427681 env[1201]: time="2025-05-17T00:40:21.427618846Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2526 runtime=io.containerd.runc.v2\n" May 17 00:40:21.877518 kubelet[1930]: E0517 00:40:21.877481 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:21.879288 env[1201]: time="2025-05-17T00:40:21.879238261Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:40:22.070532 env[1201]: time="2025-05-17T00:40:22.070458240Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\"" May 17 00:40:22.071398 env[1201]: time="2025-05-17T00:40:22.071290177Z" level=info msg="StartContainer for \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\"" May 17 00:40:22.086503 systemd[1]: Started cri-containerd-c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5.scope. 
May 17 00:40:22.109037 systemd[1]: cri-containerd-c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5.scope: Deactivated successfully. May 17 00:40:22.168192 env[1201]: time="2025-05-17T00:40:22.168129531Z" level=info msg="StartContainer for \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\" returns successfully" May 17 00:40:22.261000 env[1201]: time="2025-05-17T00:40:22.260901995Z" level=info msg="shim disconnected" id=c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5 May 17 00:40:22.261000 env[1201]: time="2025-05-17T00:40:22.260988762Z" level=warning msg="cleaning up after shim disconnected" id=c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5 namespace=k8s.io May 17 00:40:22.261000 env[1201]: time="2025-05-17T00:40:22.261001727Z" level=info msg="cleaning up dead shim" May 17 00:40:22.268432 env[1201]: time="2025-05-17T00:40:22.268376259Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2581 runtime=io.containerd.runc.v2\n" May 17 00:40:22.305628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298494608.mount: Deactivated successfully. May 17 00:40:22.880988 kubelet[1930]: E0517 00:40:22.880955 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:22.882687 env[1201]: time="2025-05-17T00:40:22.882583778Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:40:23.485485 env[1201]: time="2025-05-17T00:40:23.485414961Z" level=info msg="CreateContainer within sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\"" May 17 00:40:23.486046 env[1201]: time="2025-05-17T00:40:23.486009663Z" level=info msg="StartContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\"" May 17 00:40:23.503084 systemd[1]: Started cri-containerd-aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c.scope. 
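Annotation: taken together, the CreateContainer/StartContainer rounds above walk cilium-wl7nt's init-container chain in the strict order Kubernetes guarantees: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-running cilium-agent. Each short-lived step exits before the next is created, which is why every round ends in a routine "shim disconnected" cleanup. A minimal sketch of that ordering, not kubelet code; container names are taken from this log:

    INIT_CONTAINERS = ["mount-cgroup", "apply-sysctl-overwrites",
                       "mount-bpf-fs", "clean-cilium-state"]

    def run_init_sequence(run):
        """Run init containers one at a time; any failure blocks the pod."""
        for name in INIT_CONTAINERS:
            rc = run(name)  # start the container and wait for it to exit
            if rc != 0:
                raise RuntimeError(f"init container {name} failed rc={rc}")
        return run("cilium-agent")  # main container runs only after all succeed

    run_init_sequence(lambda name: print(f"StartContainer for {name}") or 0)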
May 17 00:40:23.612946 env[1201]: time="2025-05-17T00:40:23.612860383Z" level=info msg="StartContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" returns successfully" May 17 00:40:23.750506 kubelet[1930]: I0517 00:40:23.750407 1930 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:40:23.887265 kubelet[1930]: E0517 00:40:23.887219 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:24.088770 kubelet[1930]: I0517 00:40:24.088622 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wl7nt" podStartSLOduration=7.329285226 podStartE2EDuration="23.088599738s" podCreationTimestamp="2025-05-17 00:40:01 +0000 UTC" firstStartedPulling="2025-05-17 00:40:03.44107379 +0000 UTC m=+6.712896384" lastFinishedPulling="2025-05-17 00:40:19.200388302 +0000 UTC m=+22.472210896" observedRunningTime="2025-05-17 00:40:24.063901706 +0000 UTC m=+27.335724300" watchObservedRunningTime="2025-05-17 00:40:24.088599738 +0000 UTC m=+27.360422362" May 17 00:40:24.093915 systemd[1]: Created slice kubepods-burstable-pod6bd520a0_0048_4ee2_abcd_308512d50284.slice. May 17 00:40:24.201180 kubelet[1930]: I0517 00:40:24.201092 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd520a0-0048-4ee2-abcd-308512d50284-config-volume\") pod \"coredns-668d6bf9bc-58bkn\" (UID: \"6bd520a0-0048-4ee2-abcd-308512d50284\") " pod="kube-system/coredns-668d6bf9bc-58bkn" May 17 00:40:24.201180 kubelet[1930]: I0517 00:40:24.201168 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6msn\" (UniqueName: \"kubernetes.io/projected/6bd520a0-0048-4ee2-abcd-308512d50284-kube-api-access-v6msn\") pod \"coredns-668d6bf9bc-58bkn\" (UID: \"6bd520a0-0048-4ee2-abcd-308512d50284\") " pod="kube-system/coredns-668d6bf9bc-58bkn" May 17 00:40:24.222515 systemd[1]: Created slice kubepods-burstable-pod91fd76d9_47c1_4888_9c18_8447a0f564a8.slice. 
May 17 00:40:24.302119 kubelet[1930]: I0517 00:40:24.302021 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdcm\" (UniqueName: \"kubernetes.io/projected/91fd76d9-47c1-4888-9c18-8447a0f564a8-kube-api-access-zxdcm\") pod \"coredns-668d6bf9bc-q2zdn\" (UID: \"91fd76d9-47c1-4888-9c18-8447a0f564a8\") " pod="kube-system/coredns-668d6bf9bc-q2zdn" May 17 00:40:24.302274 kubelet[1930]: I0517 00:40:24.302135 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91fd76d9-47c1-4888-9c18-8447a0f564a8-config-volume\") pod \"coredns-668d6bf9bc-q2zdn\" (UID: \"91fd76d9-47c1-4888-9c18-8447a0f564a8\") " pod="kube-system/coredns-668d6bf9bc-q2zdn" May 17 00:40:24.396423 kubelet[1930]: E0517 00:40:24.396368 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:24.397151 env[1201]: time="2025-05-17T00:40:24.397074050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58bkn,Uid:6bd520a0-0048-4ee2-abcd-308512d50284,Namespace:kube-system,Attempt:0,}" May 17 00:40:24.525890 kubelet[1930]: E0517 00:40:24.525840 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:24.526468 env[1201]: time="2025-05-17T00:40:24.526429556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2zdn,Uid:91fd76d9-47c1-4888-9c18-8447a0f564a8,Namespace:kube-system,Attempt:0,}" May 17 00:40:24.888899 kubelet[1930]: E0517 00:40:24.888869 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:24.928170 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:55738.service. May 17 00:40:24.971357 sshd[2771]: Accepted publickey for core from 10.0.0.1 port 55738 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:24.972846 sshd[2771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:24.977016 systemd-logind[1189]: New session 7 of user core. May 17 00:40:24.978126 systemd[1]: Started session-7.scope. May 17 00:40:25.100550 sshd[2771]: pam_unix(sshd:session): session closed for user core May 17 00:40:25.102553 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:55738.service: Deactivated successfully. May 17 00:40:25.103263 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:40:25.103844 systemd-logind[1189]: Session 7 logged out. Waiting for processes to exit. May 17 00:40:25.104592 systemd-logind[1189]: Removed session 7. 
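Annotation: the VerifyControllerAttachedVolume records (here for the two coredns pods, earlier for kube-proxy-vltr6 and cilium-wl7nt) each name one expected volume per record. A sketch that folds such records into a per-pod volume table; the regex tracks only the escaped-quote field layout printed in this log:

    import re
    from collections import defaultdict

    VOLUME = re.compile(r'started for volume \\"(?P<vol>[^\\]+)\\"'
                        r'.*pod="(?P<pod>[^"]+)"')

    def volumes_by_pod(lines):
        """Map pod -> set of volume names seen in reconciler_common records."""
        table = defaultdict(set)
        for line in lines:
            match = VOLUME.search(line)
            if match:
                table[match.group("pod")].add(match.group("vol"))
        return dict(table)

    sample = ('operationExecutor.VerifyControllerAttachedVolume started for '
              'volume \\"config-volume\\" (UniqueName: ...) " '
              'pod="kube-system/coredns-668d6bf9bc-58bkn"')
    print(volumes_by_pod([sample]))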
May 17 00:40:25.608610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:40:25.609331 systemd-networkd[1024]: cilium_host: Link UP May 17 00:40:25.609456 systemd-networkd[1024]: cilium_net: Link UP May 17 00:40:25.609460 systemd-networkd[1024]: cilium_net: Gained carrier May 17 00:40:25.609666 systemd-networkd[1024]: cilium_host: Gained carrier May 17 00:40:25.609909 systemd-networkd[1024]: cilium_host: Gained IPv6LL May 17 00:40:25.690047 systemd-networkd[1024]: cilium_vxlan: Link UP May 17 00:40:25.690054 systemd-networkd[1024]: cilium_vxlan: Gained carrier May 17 00:40:25.875136 kernel: NET: Registered PF_ALG protocol family May 17 00:40:25.891065 kubelet[1930]: E0517 00:40:25.890752 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:25.910242 systemd-networkd[1024]: cilium_net: Gained IPv6LL May 17 00:40:26.394855 systemd-networkd[1024]: lxc_health: Link UP May 17 00:40:26.405415 systemd-networkd[1024]: lxc_health: Gained carrier May 17 00:40:26.406124 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:40:26.561944 systemd-networkd[1024]: lxc4ebf17407d11: Link UP May 17 00:40:26.572133 kernel: eth0: renamed from tmp02cc2 May 17 00:40:26.579032 systemd-networkd[1024]: lxc4ebf17407d11: Gained carrier May 17 00:40:26.579173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4ebf17407d11: link becomes ready May 17 00:40:26.726349 systemd-networkd[1024]: cilium_vxlan: Gained IPv6LL May 17 00:40:26.891403 kubelet[1930]: E0517 00:40:26.891369 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:26.954555 systemd-networkd[1024]: lxc0b8c5205bfa0: Link UP May 17 00:40:26.961136 kernel: eth0: renamed from tmpfe7af May 17 00:40:26.972053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:40:26.972185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0b8c5205bfa0: link becomes ready May 17 00:40:26.972335 systemd-networkd[1024]: lxc0b8c5205bfa0: Gained carrier May 17 00:40:27.761739 systemd-networkd[1024]: lxc4ebf17407d11: Gained IPv6LL May 17 00:40:27.892794 kubelet[1930]: E0517 00:40:27.892760 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:28.134257 systemd-networkd[1024]: lxc0b8c5205bfa0: Gained IPv6LL May 17 00:40:28.198303 systemd-networkd[1024]: lxc_health: Gained IPv6LL May 17 00:40:28.894403 kubelet[1930]: E0517 00:40:28.894366 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:30.104811 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:55748.service. May 17 00:40:30.172403 sshd[3169]: Accepted publickey for core from 10.0.0.1 port 55748 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:30.173685 sshd[3169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:30.178947 systemd[1]: Started session-8.scope. May 17 00:40:30.180778 systemd-logind[1189]: New session 8 of user core. May 17 00:40:30.302336 env[1201]: time="2025-05-17T00:40:30.302259493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:30.302748 env[1201]: time="2025-05-17T00:40:30.302720576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:30.302871 env[1201]: time="2025-05-17T00:40:30.302845274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:30.303150 env[1201]: time="2025-05-17T00:40:30.303119559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2 pid=3190 runtime=io.containerd.runc.v2 May 17 00:40:30.308301 sshd[3169]: pam_unix(sshd:session): session closed for user core May 17 00:40:30.310495 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:55748.service: Deactivated successfully. May 17 00:40:30.311245 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:40:30.311792 systemd-logind[1189]: Session 8 logged out. Waiting for processes to exit. May 17 00:40:30.312539 systemd-logind[1189]: Removed session 8. May 17 00:40:30.321983 systemd[1]: run-containerd-runc-k8s.io-02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2-runc.sbpsOV.mount: Deactivated successfully. May 17 00:40:30.323339 systemd[1]: Started cri-containerd-02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2.scope. May 17 00:40:30.333520 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:30.356965 env[1201]: time="2025-05-17T00:40:30.356409942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2zdn,Uid:91fd76d9-47c1-4888-9c18-8447a0f564a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2\"" May 17 00:40:30.360296 kubelet[1930]: E0517 00:40:30.360168 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:30.369144 env[1201]: time="2025-05-17T00:40:30.369106643Z" level=info msg="CreateContainer within sandbox \"02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:40:30.370271 env[1201]: time="2025-05-17T00:40:30.369446703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:30.370271 env[1201]: time="2025-05-17T00:40:30.369489145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:30.370271 env[1201]: time="2025-05-17T00:40:30.369498973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:30.370478 env[1201]: time="2025-05-17T00:40:30.370225153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e pid=3233 runtime=io.containerd.runc.v2 May 17 00:40:30.384418 systemd[1]: Started cri-containerd-fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e.scope. 
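Annotation: every "starting signal loop" record above marks a containerd runtime-v2 runc shim coming up for one pod sandbox, and the 64-hex component of its task path is the same sandbox ID the matching RunPodSandbox record later returns. A sketch that pairs each sandbox ID with its shim PID, matching only the layout these records actually show:

    import re

    SHIM = re.compile(r'path=\S*/k8s\.io/(?P<sid>[0-9a-f]{64}) pid=(?P<pid>\d+)')

    def shims(lines):
        """Yield (short sandbox id, shim pid) for each signal-loop record."""
        for line in lines:
            match = SHIM.search(line)
            if match:
                yield match.group("sid")[:12], int(match.group("pid"))

    sample = ('msg="starting signal loop" namespace=k8s.io '
              'path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/'
              'fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e '
              'pid=3233 runtime=io.containerd.runc.v2')
    print(list(shims([sample])))  # [('fe7af536dd70', 3233)]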
May 17 00:40:30.396227 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:30.419275 env[1201]: time="2025-05-17T00:40:30.419226037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58bkn,Uid:6bd520a0-0048-4ee2-abcd-308512d50284,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e\"" May 17 00:40:30.420085 kubelet[1930]: E0517 00:40:30.420054 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:30.421744 env[1201]: time="2025-05-17T00:40:30.421717706Z" level=info msg="CreateContainer within sandbox \"fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:40:30.605000 env[1201]: time="2025-05-17T00:40:30.604923465Z" level=info msg="CreateContainer within sandbox \"02cc2ccd65f9c4f4d07b853b0215208b6a4c5aa2dbef439f6ba5a54122b0d4c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b047abfa2901a2f263856f90e0a9f6405b254b8fc7b807fa38629f20097c169a\"" May 17 00:40:30.605646 env[1201]: time="2025-05-17T00:40:30.605614237Z" level=info msg="StartContainer for \"b047abfa2901a2f263856f90e0a9f6405b254b8fc7b807fa38629f20097c169a\"" May 17 00:40:30.611895 env[1201]: time="2025-05-17T00:40:30.611758837Z" level=info msg="CreateContainer within sandbox \"fe7af536dd70a7146e3c50d80472fe057575a3d6a3e28be7ae6b6023770c4f4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce13287554c4f279a16fad92592803d35fdec41d5098d74836f060a74ca2cfa3\"" May 17 00:40:30.612496 env[1201]: time="2025-05-17T00:40:30.612471631Z" level=info msg="StartContainer for \"ce13287554c4f279a16fad92592803d35fdec41d5098d74836f060a74ca2cfa3\"" May 17 00:40:30.619942 systemd[1]: Started cri-containerd-b047abfa2901a2f263856f90e0a9f6405b254b8fc7b807fa38629f20097c169a.scope. May 17 00:40:30.634771 systemd[1]: Started cri-containerd-ce13287554c4f279a16fad92592803d35fdec41d5098d74836f060a74ca2cfa3.scope. 
May 17 00:40:30.654379 env[1201]: time="2025-05-17T00:40:30.654338211Z" level=info msg="StartContainer for \"b047abfa2901a2f263856f90e0a9f6405b254b8fc7b807fa38629f20097c169a\" returns successfully" May 17 00:40:30.662771 env[1201]: time="2025-05-17T00:40:30.662710595Z" level=info msg="StartContainer for \"ce13287554c4f279a16fad92592803d35fdec41d5098d74836f060a74ca2cfa3\" returns successfully" May 17 00:40:30.904232 kubelet[1930]: E0517 00:40:30.904193 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:30.904232 kubelet[1930]: E0517 00:40:30.904205 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:30.926872 kubelet[1930]: I0517 00:40:30.926798 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58bkn" podStartSLOduration=28.92676983 podStartE2EDuration="28.92676983s" podCreationTimestamp="2025-05-17 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:30.924676644 +0000 UTC m=+34.196499248" watchObservedRunningTime="2025-05-17 00:40:30.92676983 +0000 UTC m=+34.198592424" May 17 00:40:30.935948 kubelet[1930]: I0517 00:40:30.935886 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q2zdn" podStartSLOduration=28.93586646 podStartE2EDuration="28.93586646s" podCreationTimestamp="2025-05-17 00:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:30.935474771 +0000 UTC m=+34.207297365" watchObservedRunningTime="2025-05-17 00:40:30.93586646 +0000 UTC m=+34.207689054" May 17 00:40:35.312752 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:48500.service. May 17 00:40:35.356376 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 48500 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:35.357767 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:35.361843 systemd-logind[1189]: New session 9 of user core. May 17 00:40:35.362746 systemd[1]: Started session-9.scope. May 17 00:40:35.488921 sshd[3352]: pam_unix(sshd:session): session closed for user core May 17 00:40:35.491132 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:48500.service: Deactivated successfully. May 17 00:40:35.491782 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:40:35.492503 systemd-logind[1189]: Session 9 logged out. Waiting for processes to exit. May 17 00:40:35.493294 systemd-logind[1189]: Removed session 9. May 17 00:40:40.494247 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:48502.service. May 17 00:40:40.533244 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 48502 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:40.534424 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:40.538222 systemd-logind[1189]: New session 10 of user core. May 17 00:40:40.539118 systemd[1]: Started session-10.scope. 
May 17 00:40:40.685674 sshd[3366]: pam_unix(sshd:session): session closed for user core May 17 00:40:40.688322 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:48502.service: Deactivated successfully. May 17 00:40:40.688832 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:40:40.689372 systemd-logind[1189]: Session 10 logged out. Waiting for processes to exit. May 17 00:40:40.690465 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:48510.service. May 17 00:40:40.691279 systemd-logind[1189]: Removed session 10. May 17 00:40:40.730834 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 48510 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:40.731824 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:40.735054 systemd-logind[1189]: New session 11 of user core. May 17 00:40:40.735775 systemd[1]: Started session-11.scope. May 17 00:40:40.902697 kubelet[1930]: E0517 00:40:40.901598 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:40.902697 kubelet[1930]: E0517 00:40:40.902595 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:40.924230 kubelet[1930]: E0517 00:40:40.921314 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:40.924230 kubelet[1930]: E0517 00:40:40.921593 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:41.308727 sshd[3380]: pam_unix(sshd:session): session closed for user core May 17 00:40:41.312659 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:48518.service. May 17 00:40:41.328395 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:48510.service: Deactivated successfully. May 17 00:40:41.329412 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:40:41.336247 systemd-logind[1189]: Session 11 logged out. Waiting for processes to exit. May 17 00:40:41.337394 systemd-logind[1189]: Removed session 11. May 17 00:40:41.401563 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 48518 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:41.412826 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:41.427576 systemd[1]: Started session-12.scope. May 17 00:40:41.428204 systemd-logind[1189]: New session 12 of user core. May 17 00:40:41.799341 sshd[3395]: pam_unix(sshd:session): session closed for user core May 17 00:40:41.814258 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:48518.service: Deactivated successfully. May 17 00:40:41.815056 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:40:41.818053 systemd-logind[1189]: Session 12 logged out. Waiting for processes to exit. May 17 00:40:41.820970 systemd-logind[1189]: Removed session 12. May 17 00:40:46.846374 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:48080.service. 
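Annotation: from here to the end of the capture the traffic is almost entirely SSH session churn from 10.0.0.1: sshd accepts the same RSA key, pam_unix opens a session for user core, systemd carves out a session-N.scope, and moments later everything is torn down. A sketch that pairs the pam_unix open/close records by sshd PID to measure session lifetimes; the year is assumed, since syslog-style timestamps omit it:

    import re
    from datetime import datetime

    RECORD = re.compile(r'^(?P<ts>\w+ +\d+ [\d:.]+) sshd\[(?P<pid>\d+)\]: '
                        r'pam_unix\(sshd:session\): session (?P<event>opened|closed)')

    def session_durations(lines, year=2025):
        """Pair session opened/closed records per sshd PID; return lifetimes."""
        opened, lifetimes = {}, []
        for line in lines:
            match = RECORD.match(line)
            if not match:
                continue
            stamp = datetime.strptime(f"{year} {match['ts']}",
                                      "%Y %b %d %H:%M:%S.%f")
            if match["event"] == "opened":
                opened[match["pid"]] = stamp
            elif match["pid"] in opened:
                lifetimes.append(stamp - opened.pop(match["pid"]))
        return lifetimes

    sample = [
        "May 17 00:40:40.534424 sshd[3366]: pam_unix(sshd:session): "
        "session opened for user core(uid=500) by (uid=0)",
        "May 17 00:40:40.685674 sshd[3366]: pam_unix(sshd:session): "
        "session closed for user core",
    ]
    print(session_durations(sample))  # [datetime.timedelta(microseconds=151250)]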
May 17 00:40:46.945618 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 48080 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:46.951680 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:46.966620 systemd-logind[1189]: New session 13 of user core. May 17 00:40:46.970506 systemd[1]: Started session-13.scope. May 17 00:40:47.287569 sshd[3410]: pam_unix(sshd:session): session closed for user core May 17 00:40:47.296656 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:48080.service: Deactivated successfully. May 17 00:40:47.297806 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:40:47.300722 systemd-logind[1189]: Session 13 logged out. Waiting for processes to exit. May 17 00:40:47.302054 systemd-logind[1189]: Removed session 13. May 17 00:40:52.307628 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:48084.service. May 17 00:40:52.405800 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 48084 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:52.407361 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:52.416683 systemd-logind[1189]: New session 14 of user core. May 17 00:40:52.417778 systemd[1]: Started session-14.scope. May 17 00:40:52.687395 sshd[3424]: pam_unix(sshd:session): session closed for user core May 17 00:40:52.692721 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:48084.service: Deactivated successfully. May 17 00:40:52.693639 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:40:52.695698 systemd-logind[1189]: Session 14 logged out. Waiting for processes to exit. May 17 00:40:52.701999 systemd-logind[1189]: Removed session 14. May 17 00:40:57.692280 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:46398.service. May 17 00:40:57.732939 sshd[3440]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:57.734225 sshd[3440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:57.738210 systemd-logind[1189]: New session 15 of user core. May 17 00:40:57.739364 systemd[1]: Started session-15.scope. May 17 00:40:57.843375 sshd[3440]: pam_unix(sshd:session): session closed for user core May 17 00:40:57.845571 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:46398.service: Deactivated successfully. May 17 00:40:57.846339 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:40:57.846941 systemd-logind[1189]: Session 15 logged out. Waiting for processes to exit. May 17 00:40:57.847688 systemd-logind[1189]: Removed session 15. May 17 00:41:02.860753 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:46400.service. May 17 00:41:02.938702 sshd[3454]: Accepted publickey for core from 10.0.0.1 port 46400 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:02.941724 sshd[3454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:02.960023 systemd-logind[1189]: New session 16 of user core. May 17 00:41:02.965876 systemd[1]: Started session-16.scope. May 17 00:41:03.225076 sshd[3454]: pam_unix(sshd:session): session closed for user core May 17 00:41:03.233262 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:46404.service. May 17 00:41:03.242399 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:46400.service: Deactivated successfully. May 17 00:41:03.243225 systemd[1]: session-16.scope: Deactivated successfully. 
May 17 00:41:03.245451 systemd-logind[1189]: Session 16 logged out. Waiting for processes to exit. May 17 00:41:03.254966 systemd-logind[1189]: Removed session 16. May 17 00:41:03.296703 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 46404 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:03.298860 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:03.320464 systemd-logind[1189]: New session 17 of user core. May 17 00:41:03.326576 systemd[1]: Started session-17.scope. May 17 00:41:04.038243 sshd[3466]: pam_unix(sshd:session): session closed for user core May 17 00:41:04.049627 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:54520.service. May 17 00:41:04.054236 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:46404.service: Deactivated successfully. May 17 00:41:04.055223 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:41:04.059173 systemd-logind[1189]: Session 17 logged out. Waiting for processes to exit. May 17 00:41:04.061463 systemd-logind[1189]: Removed session 17. May 17 00:41:04.121339 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 54520 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:04.123263 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:04.139957 systemd[1]: Started session-18.scope. May 17 00:41:04.144202 systemd-logind[1189]: New session 18 of user core. May 17 00:41:05.756361 sshd[3480]: pam_unix(sshd:session): session closed for user core May 17 00:41:05.776808 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:54528.service. May 17 00:41:05.778212 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:54520.service: Deactivated successfully. May 17 00:41:05.779776 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:41:05.781059 systemd-logind[1189]: Session 18 logged out. Waiting for processes to exit. May 17 00:41:05.796715 systemd-logind[1189]: Removed session 18. May 17 00:41:05.822613 kubelet[1930]: E0517 00:41:05.819788 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:05.883556 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 54528 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:05.887289 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:05.910807 systemd-logind[1189]: New session 19 of user core. May 17 00:41:05.925907 systemd[1]: Started session-19.scope. May 17 00:41:06.535578 sshd[3500]: pam_unix(sshd:session): session closed for user core May 17 00:41:06.547658 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:54542.service. May 17 00:41:06.565475 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:54528.service: Deactivated successfully. May 17 00:41:06.566376 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:41:06.567623 systemd-logind[1189]: Session 19 logged out. Waiting for processes to exit. May 17 00:41:06.568899 systemd-logind[1189]: Removed session 19. May 17 00:41:06.649927 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 54542 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:06.652320 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:06.665477 systemd-logind[1189]: New session 20 of user core. May 17 00:41:06.670833 systemd[1]: Started session-20.scope. 
May 17 00:41:06.969816 sshd[3511]: pam_unix(sshd:session): session closed for user core May 17 00:41:06.983518 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:54542.service: Deactivated successfully. May 17 00:41:06.984483 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:41:06.989239 systemd-logind[1189]: Session 20 logged out. Waiting for processes to exit. May 17 00:41:06.990429 systemd-logind[1189]: Removed session 20. May 17 00:41:11.979851 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:54544.service. May 17 00:41:12.038930 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 54544 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:12.049033 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:12.060680 systemd-logind[1189]: New session 21 of user core. May 17 00:41:12.061273 systemd[1]: Started session-21.scope. May 17 00:41:12.410924 sshd[3525]: pam_unix(sshd:session): session closed for user core May 17 00:41:12.416588 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:54544.service: Deactivated successfully. May 17 00:41:12.417442 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:41:12.423079 systemd-logind[1189]: Session 21 logged out. Waiting for processes to exit. May 17 00:41:12.424190 systemd-logind[1189]: Removed session 21. May 17 00:41:17.420327 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:55892.service. May 17 00:41:17.488245 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 55892 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:17.502295 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:17.520936 systemd[1]: Started session-22.scope. May 17 00:41:17.522817 systemd-logind[1189]: New session 22 of user core. May 17 00:41:17.762044 sshd[3540]: pam_unix(sshd:session): session closed for user core May 17 00:41:17.765634 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:55892.service: Deactivated successfully. May 17 00:41:17.766565 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:41:17.772655 systemd-logind[1189]: Session 22 logged out. Waiting for processes to exit. May 17 00:41:17.788512 systemd-logind[1189]: Removed session 22. May 17 00:41:22.769738 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:55894.service. May 17 00:41:22.846596 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:22.848583 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:22.855113 systemd-logind[1189]: New session 23 of user core. May 17 00:41:22.856346 systemd[1]: Started session-23.scope. May 17 00:41:23.028809 sshd[3554]: pam_unix(sshd:session): session closed for user core May 17 00:41:23.033624 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:55894.service: Deactivated successfully. May 17 00:41:23.034642 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:41:23.036984 systemd-logind[1189]: Session 23 logged out. Waiting for processes to exit. May 17 00:41:23.045454 systemd-logind[1189]: Removed session 23. May 17 00:41:23.819045 kubelet[1930]: E0517 00:41:23.818980 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:28.040593 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:47364.service. 
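The sshd churn throughout this log follows one fixed systemd pattern: each inbound TCP connection gets a per-connection unit named sshd@<n>-<local>:22-<peer>:<port>.service, pam_unix opens the session, logind registers it as session-<n>.scope, and both units are deactivated when the client disconnects. A small hypothetical helper (not part of any tool shown here) for tallying those open/close pairs from a journal dump piped on stdin:

```go
// Hypothetical log-scanning helper: count sshd session open/close events.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	opened, closed := 0, 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.Contains(line, "pam_unix(sshd:session): session opened"):
			opened++
		case strings.Contains(line, "pam_unix(sshd:session): session closed"):
			closed++
		}
	}
	fmt.Printf("sessions opened=%d closed=%d still-open=%d\n",
		opened, closed, opened-closed)
}
```

Fed this log on stdin, it would report how many sessions were still open at the point of capture.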
May 17 00:41:28.140300 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 47364 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:28.145073 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:28.156212 systemd-logind[1189]: New session 24 of user core. May 17 00:41:28.158179 systemd[1]: Started session-24.scope. May 17 00:41:28.400157 sshd[3567]: pam_unix(sshd:session): session closed for user core May 17 00:41:28.413356 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:47364.service: Deactivated successfully. May 17 00:41:28.414215 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:41:28.416559 systemd-logind[1189]: Session 24 logged out. Waiting for processes to exit. May 17 00:41:28.435585 systemd-logind[1189]: Removed session 24. May 17 00:41:30.825159 kubelet[1930]: E0517 00:41:30.825113 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:31.819350 kubelet[1930]: E0517 00:41:31.819282 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:31.819999 kubelet[1930]: E0517 00:41:31.819968 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:33.412434 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:47370.service. May 17 00:41:33.479969 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 47370 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:33.482268 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:33.504310 systemd[1]: Started session-25.scope. May 17 00:41:33.505072 systemd-logind[1189]: New session 25 of user core. May 17 00:41:33.765523 sshd[3580]: pam_unix(sshd:session): session closed for user core May 17 00:41:33.789770 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:41020.service. May 17 00:41:33.799707 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:47370.service: Deactivated successfully. May 17 00:41:33.800669 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:41:33.808343 systemd-logind[1189]: Session 25 logged out. Waiting for processes to exit. May 17 00:41:33.819631 systemd-logind[1189]: Removed session 25. May 17 00:41:33.828251 kubelet[1930]: E0517 00:41:33.820712 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:33.874634 sshd[3593]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:33.877732 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:33.902241 systemd[1]: Started session-26.scope. May 17 00:41:33.908385 systemd-logind[1189]: New session 26 of user core. May 17 00:41:35.773680 systemd[1]: run-containerd-runc-k8s.io-aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c-runc.eCBgMt.mount: Deactivated successfully. 
May 17 00:41:35.782426 env[1201]: time="2025-05-17T00:41:35.782258555Z" level=info msg="StopContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" with timeout 30 (s)" May 17 00:41:35.787218 env[1201]: time="2025-05-17T00:41:35.787130364Z" level=info msg="Stop container \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" with signal terminated" May 17 00:41:35.834617 systemd[1]: cri-containerd-e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac.scope: Deactivated successfully. May 17 00:41:35.842584 env[1201]: time="2025-05-17T00:41:35.842519772Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:41:35.871320 env[1201]: time="2025-05-17T00:41:35.871273536Z" level=info msg="StopContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" with timeout 2 (s)" May 17 00:41:35.871840 env[1201]: time="2025-05-17T00:41:35.871805867Z" level=info msg="Stop container \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" with signal terminated" May 17 00:41:35.888869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac-rootfs.mount: Deactivated successfully. May 17 00:41:35.898301 systemd-networkd[1024]: lxc_health: Link DOWN May 17 00:41:35.898305 systemd-networkd[1024]: lxc_health: Lost carrier May 17 00:41:35.970328 env[1201]: time="2025-05-17T00:41:35.969971167Z" level=info msg="shim disconnected" id=e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac May 17 00:41:35.970328 env[1201]: time="2025-05-17T00:41:35.970032713Z" level=warning msg="cleaning up after shim disconnected" id=e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac namespace=k8s.io May 17 00:41:35.970328 env[1201]: time="2025-05-17T00:41:35.970045167Z" level=info msg="cleaning up dead shim" May 17 00:41:35.999128 systemd[1]: cri-containerd-aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c.scope: Deactivated successfully. May 17 00:41:35.999468 systemd[1]: cri-containerd-aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c.scope: Consumed 7.005s CPU time. May 17 00:41:36.017242 env[1201]: time="2025-05-17T00:41:36.007078222Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3650 runtime=io.containerd.runc.v2\n" May 17 00:41:36.051638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c-rootfs.mount: Deactivated successfully. 
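The two StopContainer requests above show CRI grace periods in play: 30 seconds for the operator container and 2 seconds for the long-running cilium agent (whose scope had consumed just over 7 s of CPU time). Containerd delivers SIGTERM, waits out the timeout before escalating, and the "shim disconnected" lines mark the task actually exiting. A sketch of issuing the same stop through the CRI API — the socket path and the v1 API are assumptions based on a stock containerd setup, not something this log confirms:

```go
// Hedged sketch, not the kubelet code path: stop a container over CRI with a
// SIGTERM grace period, as in the "StopContainer ... with timeout 30" entry.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket; adjust for the host's containerd address.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 40*time.Second)
	defer cancel()

	// Timeout is the grace period in seconds before the runtime sends SIGKILL.
	_, err = client.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: os.Args[1],
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```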
May 17 00:41:36.149982 env[1201]: time="2025-05-17T00:41:36.144660733Z" level=info msg="StopContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" returns successfully" May 17 00:41:36.152378 env[1201]: time="2025-05-17T00:41:36.152327574Z" level=info msg="StopPodSandbox for \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\"" May 17 00:41:36.152698 env[1201]: time="2025-05-17T00:41:36.152626256Z" level=info msg="Container to stop \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.164882 systemd[1]: cri-containerd-972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11.scope: Deactivated successfully. May 17 00:41:36.215875 env[1201]: time="2025-05-17T00:41:36.215801966Z" level=info msg="shim disconnected" id=aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c May 17 00:41:36.215875 env[1201]: time="2025-05-17T00:41:36.215872799Z" level=warning msg="cleaning up after shim disconnected" id=aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c namespace=k8s.io May 17 00:41:36.216133 env[1201]: time="2025-05-17T00:41:36.215888719Z" level=info msg="cleaning up dead shim" May 17 00:41:36.242432 env[1201]: time="2025-05-17T00:41:36.242335783Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3693 runtime=io.containerd.runc.v2\n" May 17 00:41:36.382002 env[1201]: time="2025-05-17T00:41:36.381449856Z" level=info msg="StopContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" returns successfully" May 17 00:41:36.382736 env[1201]: time="2025-05-17T00:41:36.382708064Z" level=info msg="StopPodSandbox for \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\"" May 17 00:41:36.382899 env[1201]: time="2025-05-17T00:41:36.382869178Z" level=info msg="Container to stop \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.383015 env[1201]: time="2025-05-17T00:41:36.382988342Z" level=info msg="Container to stop \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.383125 env[1201]: time="2025-05-17T00:41:36.383083672Z" level=info msg="Container to stop \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.383235 env[1201]: time="2025-05-17T00:41:36.383208757Z" level=info msg="Container to stop \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.383332 env[1201]: time="2025-05-17T00:41:36.383306861Z" level=info msg="Container to stop \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.405618 systemd[1]: cri-containerd-2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e.scope: Deactivated successfully. 
May 17 00:41:36.414822 env[1201]: time="2025-05-17T00:41:36.413201315Z" level=info msg="shim disconnected" id=972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11 May 17 00:41:36.414822 env[1201]: time="2025-05-17T00:41:36.413274392Z" level=warning msg="cleaning up after shim disconnected" id=972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11 namespace=k8s.io May 17 00:41:36.414822 env[1201]: time="2025-05-17T00:41:36.413286905Z" level=info msg="cleaning up dead shim" May 17 00:41:36.479311 env[1201]: time="2025-05-17T00:41:36.479165513Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" May 17 00:41:36.481124 env[1201]: time="2025-05-17T00:41:36.480150297Z" level=info msg="TearDown network for sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" successfully" May 17 00:41:36.481124 env[1201]: time="2025-05-17T00:41:36.480181315Z" level=info msg="StopPodSandbox for \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" returns successfully" May 17 00:41:36.539980 env[1201]: time="2025-05-17T00:41:36.536949113Z" level=info msg="shim disconnected" id=2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e May 17 00:41:36.539980 env[1201]: time="2025-05-17T00:41:36.537012462Z" level=warning msg="cleaning up after shim disconnected" id=2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e namespace=k8s.io May 17 00:41:36.539980 env[1201]: time="2025-05-17T00:41:36.537025817Z" level=info msg="cleaning up dead shim" May 17 00:41:36.583900 env[1201]: time="2025-05-17T00:41:36.582472705Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3735 runtime=io.containerd.runc.v2\n" May 17 00:41:36.583900 env[1201]: time="2025-05-17T00:41:36.583418806Z" level=info msg="TearDown network for sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" successfully" May 17 00:41:36.586166 env[1201]: time="2025-05-17T00:41:36.585148602Z" level=info msg="StopPodSandbox for \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" returns successfully" May 17 00:41:36.609881 kubelet[1930]: I0517 00:41:36.609812 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp98\" (UniqueName: \"kubernetes.io/projected/0d315840-9b2e-4732-81a9-8d50fbd1700e-kube-api-access-2mp98\") pod \"0d315840-9b2e-4732-81a9-8d50fbd1700e\" (UID: \"0d315840-9b2e-4732-81a9-8d50fbd1700e\") " May 17 00:41:36.609881 kubelet[1930]: I0517 00:41:36.609845 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d315840-9b2e-4732-81a9-8d50fbd1700e-cilium-config-path\") pod \"0d315840-9b2e-4732-81a9-8d50fbd1700e\" (UID: \"0d315840-9b2e-4732-81a9-8d50fbd1700e\") " May 17 00:41:36.620444 kubelet[1930]: I0517 00:41:36.620387 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d315840-9b2e-4732-81a9-8d50fbd1700e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d315840-9b2e-4732-81a9-8d50fbd1700e" (UID: "0d315840-9b2e-4732-81a9-8d50fbd1700e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:41:36.646838 kubelet[1930]: I0517 00:41:36.646786 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d315840-9b2e-4732-81a9-8d50fbd1700e-kube-api-access-2mp98" (OuterVolumeSpecName: "kube-api-access-2mp98") pod "0d315840-9b2e-4732-81a9-8d50fbd1700e" (UID: "0d315840-9b2e-4732-81a9-8d50fbd1700e"). InnerVolumeSpecName "kube-api-access-2mp98". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:36.716083 kubelet[1930]: I0517 00:41:36.715999 1930 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2mp98\" (UniqueName: \"kubernetes.io/projected/0d315840-9b2e-4732-81a9-8d50fbd1700e-kube-api-access-2mp98\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.716083 kubelet[1930]: I0517 00:41:36.716055 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d315840-9b2e-4732-81a9-8d50fbd1700e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.773252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e-rootfs.mount: Deactivated successfully. May 17 00:41:36.773357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e-shm.mount: Deactivated successfully. May 17 00:41:36.773436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11-rootfs.mount: Deactivated successfully. May 17 00:41:36.773507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11-shm.mount: Deactivated successfully. May 17 00:41:36.773572 systemd[1]: var-lib-kubelet-pods-0d315840\x2d9b2e\x2d4732\x2d81a9\x2d8d50fbd1700e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mp98.mount: Deactivated successfully. 
May 17 00:41:36.820634 kubelet[1930]: I0517 00:41:36.816940 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-kernel\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.820634 kubelet[1930]: I0517 00:41:36.817888 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-bpf-maps\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.822969 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a71154ac-d7bc-4377-905d-b04e4476e2c6-clustermesh-secrets\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.822999 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n77bm\" (UniqueName: \"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.823017 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-net\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.823033 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-lib-modules\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.823052 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-etc-cni-netd\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.824979 kubelet[1930]: I0517 00:41:36.823071 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-xtables-lock\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 kubelet[1930]: I0517 00:41:36.823109 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-hostproc\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 kubelet[1930]: I0517 00:41:36.823132 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-config-path\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 
kubelet[1930]: I0517 00:41:36.823153 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cni-path\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 kubelet[1930]: I0517 00:41:36.823174 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-run\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 kubelet[1930]: I0517 00:41:36.823196 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-hubble-tls\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825255 kubelet[1930]: I0517 00:41:36.823214 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-cgroup\") pod \"a71154ac-d7bc-4377-905d-b04e4476e2c6\" (UID: \"a71154ac-d7bc-4377-905d-b04e4476e2c6\") " May 17 00:41:36.825485 kubelet[1930]: I0517 00:41:36.817434 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.825485 kubelet[1930]: I0517 00:41:36.819325 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.842941 systemd[1]: var-lib-kubelet-pods-a71154ac\x2dd7bc\x2d4377\x2d905d\x2db04e4476e2c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn77bm.mount: Deactivated successfully. May 17 00:41:36.843042 systemd[1]: var-lib-kubelet-pods-a71154ac\x2dd7bc\x2d4377\x2d905d\x2db04e4476e2c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:41:36.844233 kubelet[1930]: I0517 00:41:36.844197 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.844369 kubelet[1930]: I0517 00:41:36.844351 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.844464 kubelet[1930]: I0517 00:41:36.844446 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.844565 kubelet[1930]: I0517 00:41:36.844545 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.844677 kubelet[1930]: I0517 00:41:36.844659 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.847066 kubelet[1930]: I0517 00:41:36.847047 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:41:36.847195 kubelet[1930]: I0517 00:41:36.847176 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.847301 kubelet[1930]: I0517 00:41:36.847284 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.847641 kubelet[1930]: I0517 00:41:36.847620 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.864491 systemd[1]: var-lib-kubelet-pods-a71154ac\x2dd7bc\x2d4377\x2d905d\x2db04e4476e2c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:41:36.882543 kubelet[1930]: I0517 00:41:36.882509 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71154ac-d7bc-4377-905d-b04e4476e2c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:41:36.885357 systemd[1]: Removed slice kubepods-besteffort-pod0d315840_9b2e_4732_81a9_8d50fbd1700e.slice. May 17 00:41:36.890570 kubelet[1930]: E0517 00:41:36.890540 1930 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:41:36.911180 kubelet[1930]: I0517 00:41:36.898563 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:36.911180 kubelet[1930]: I0517 00:41:36.898653 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm" (OuterVolumeSpecName: "kube-api-access-n77bm") pod "a71154ac-d7bc-4377-905d-b04e4476e2c6" (UID: "a71154ac-d7bc-4377-905d-b04e4476e2c6"). InnerVolumeSpecName "kube-api-access-n77bm". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:36.924113 kubelet[1930]: I0517 00:41:36.924037 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-run\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924113 kubelet[1930]: I0517 00:41:36.924082 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924113 kubelet[1930]: I0517 00:41:36.924094 1930 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cni-path\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924113 kubelet[1930]: I0517 00:41:36.924123 1930 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924134 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924145 1930 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924156 1930 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n77bm\" (UniqueName: 
\"kubernetes.io/projected/a71154ac-d7bc-4377-905d-b04e4476e2c6-kube-api-access-n77bm\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924168 1930 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924180 1930 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924191 1930 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a71154ac-d7bc-4377-905d-b04e4476e2c6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924201 1930 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-lib-modules\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924379 kubelet[1930]: I0517 00:41:36.924210 1930 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-hostproc\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924593 kubelet[1930]: I0517 00:41:36.924221 1930 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 17 00:41:36.924593 kubelet[1930]: I0517 00:41:36.924232 1930 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71154ac-d7bc-4377-905d-b04e4476e2c6-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 17 00:41:37.210575 kubelet[1930]: I0517 00:41:37.210477 1930 scope.go:117] "RemoveContainer" containerID="e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac" May 17 00:41:37.216204 env[1201]: time="2025-05-17T00:41:37.216163465Z" level=info msg="RemoveContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\"" May 17 00:41:37.230849 systemd[1]: Removed slice kubepods-burstable-poda71154ac_d7bc_4377_905d_b04e4476e2c6.slice. May 17 00:41:37.230966 systemd[1]: kubepods-burstable-poda71154ac_d7bc_4377_905d_b04e4476e2c6.slice: Consumed 7.107s CPU time. 
May 17 00:41:37.252301 env[1201]: time="2025-05-17T00:41:37.252129501Z" level=info msg="RemoveContainer for \"e20b663ced8a8443c3afff9053227f63efd92166c54052f875905d66ae8e7bac\" returns successfully" May 17 00:41:37.259325 kubelet[1930]: I0517 00:41:37.259240 1930 scope.go:117] "RemoveContainer" containerID="aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c" May 17 00:41:37.267469 env[1201]: time="2025-05-17T00:41:37.267121879Z" level=info msg="RemoveContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\"" May 17 00:41:37.303136 env[1201]: time="2025-05-17T00:41:37.302940608Z" level=info msg="RemoveContainer for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" returns successfully" May 17 00:41:37.303340 kubelet[1930]: I0517 00:41:37.303306 1930 scope.go:117] "RemoveContainer" containerID="c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5" May 17 00:41:37.313574 env[1201]: time="2025-05-17T00:41:37.313529596Z" level=info msg="RemoveContainer for \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\"" May 17 00:41:37.326074 env[1201]: time="2025-05-17T00:41:37.326001537Z" level=info msg="RemoveContainer for \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\" returns successfully" May 17 00:41:37.326445 kubelet[1930]: I0517 00:41:37.326339 1930 scope.go:117] "RemoveContainer" containerID="0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898" May 17 00:41:37.329824 env[1201]: time="2025-05-17T00:41:37.329640699Z" level=info msg="RemoveContainer for \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\"" May 17 00:41:37.343233 env[1201]: time="2025-05-17T00:41:37.343030239Z" level=info msg="RemoveContainer for \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\" returns successfully" May 17 00:41:37.345217 kubelet[1930]: I0517 00:41:37.345172 1930 scope.go:117] "RemoveContainer" containerID="91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be" May 17 00:41:37.361001 env[1201]: time="2025-05-17T00:41:37.358291693Z" level=info msg="RemoveContainer for \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\"" May 17 00:41:37.372357 env[1201]: time="2025-05-17T00:41:37.371517164Z" level=info msg="RemoveContainer for \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\" returns successfully" May 17 00:41:37.374064 kubelet[1930]: I0517 00:41:37.373242 1930 scope.go:117] "RemoveContainer" containerID="1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5" May 17 00:41:37.402571 env[1201]: time="2025-05-17T00:41:37.402501471Z" level=info msg="RemoveContainer for \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\"" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.409509245Z" level=info msg="RemoveContainer for \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\" returns successfully" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.410383751Z" level=error msg="ContainerStatus for \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\": not found" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.410817218Z" level=error msg="ContainerStatus for \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\": not found" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.411069222Z" level=error msg="ContainerStatus for \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\": not found" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.411367875Z" level=error msg="ContainerStatus for \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\": not found" May 17 00:41:37.411904 env[1201]: time="2025-05-17T00:41:37.411592958Z" level=error msg="ContainerStatus for \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\": not found" May 17 00:41:37.412248 kubelet[1930]: I0517 00:41:37.409740 1930 scope.go:117] "RemoveContainer" containerID="aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c" May 17 00:41:37.412248 kubelet[1930]: E0517 00:41:37.410553 1930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\": not found" containerID="aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c" May 17 00:41:37.412248 kubelet[1930]: I0517 00:41:37.410579 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c"} err="failed to get container status \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\": rpc error: code = NotFound desc = an error occurred when try to find container \"aca2e760eb8fbfb8c3210a4e0d6c3244281541260c53e1248cb25731b027900c\": not found" May 17 00:41:37.412248 kubelet[1930]: I0517 00:41:37.410673 1930 scope.go:117] "RemoveContainer" containerID="c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5" May 17 00:41:37.412248 kubelet[1930]: E0517 00:41:37.410904 1930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\": not found" containerID="c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5" May 17 00:41:37.412248 kubelet[1930]: I0517 00:41:37.410932 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5"} err="failed to get container status \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c035a25092c0f09389ce495139ed71d87f40d00a2dc29104d5757c8091a534b5\": not found" May 17 00:41:37.412248 kubelet[1930]: I0517 00:41:37.410947 1930 scope.go:117] "RemoveContainer" containerID="0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898" May 17 00:41:37.412560 kubelet[1930]: E0517 00:41:37.411199 1930 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\": not found" containerID="0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898" May 17 00:41:37.412560 kubelet[1930]: I0517 00:41:37.411219 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898"} err="failed to get container status \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\": rpc error: code = NotFound desc = an error occurred when try to find container \"0248af8319109491e4dd999ac442b7d266a9b094bc8b252ab6aa1ef5638f4898\": not found" May 17 00:41:37.412560 kubelet[1930]: I0517 00:41:37.411237 1930 scope.go:117] "RemoveContainer" containerID="91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be" May 17 00:41:37.412560 kubelet[1930]: E0517 00:41:37.411454 1930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\": not found" containerID="91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be" May 17 00:41:37.412560 kubelet[1930]: I0517 00:41:37.411471 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be"} err="failed to get container status \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\": rpc error: code = NotFound desc = an error occurred when try to find container \"91afe2f64d294703d37335520e8759f58af3d4b9e4d403c4cc74a58a9250f3be\": not found" May 17 00:41:37.412560 kubelet[1930]: I0517 00:41:37.411483 1930 scope.go:117] "RemoveContainer" containerID="1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5" May 17 00:41:37.412713 kubelet[1930]: E0517 00:41:37.411677 1930 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\": not found" containerID="1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5" May 17 00:41:37.412713 kubelet[1930]: I0517 00:41:37.411694 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5"} err="failed to get container status \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1be2804bae8291a0b83e525c526315d899dc091a76b9ca0c32685f0209c1e3a5\": not found" May 17 00:41:37.610485 sshd[3593]: pam_unix(sshd:session): session closed for user core May 17 00:41:37.619116 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:41034.service. May 17 00:41:37.642365 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:41020.service: Deactivated successfully. May 17 00:41:37.650430 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:41:37.652279 systemd-logind[1189]: Session 26 logged out. Waiting for processes to exit. May 17 00:41:37.665261 systemd-logind[1189]: Removed session 26. 
May 17 00:41:37.707424 sshd[3752]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:37.709095 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:37.745249 systemd-logind[1189]: New session 27 of user core. May 17 00:41:37.752684 systemd[1]: Started session-27.scope. May 17 00:41:38.822897 kubelet[1930]: I0517 00:41:38.822817 1930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d315840-9b2e-4732-81a9-8d50fbd1700e" path="/var/lib/kubelet/pods/0d315840-9b2e-4732-81a9-8d50fbd1700e/volumes" May 17 00:41:38.823325 kubelet[1930]: I0517 00:41:38.823262 1930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71154ac-d7bc-4377-905d-b04e4476e2c6" path="/var/lib/kubelet/pods/a71154ac-d7bc-4377-905d-b04e4476e2c6/volumes" May 17 00:41:38.901841 sshd[3752]: pam_unix(sshd:session): session closed for user core May 17 00:41:38.905854 systemd[1]: Started sshd@27-10.0.0.137:22-10.0.0.1:41040.service. May 17 00:41:38.922808 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:41034.service: Deactivated successfully. May 17 00:41:38.923793 systemd[1]: session-27.scope: Deactivated successfully. May 17 00:41:38.925143 systemd-logind[1189]: Session 27 logged out. Waiting for processes to exit. May 17 00:41:38.934568 systemd-logind[1189]: Removed session 27. May 17 00:41:39.017740 kubelet[1930]: I0517 00:41:39.016449 1930 memory_manager.go:355] "RemoveStaleState removing state" podUID="0d315840-9b2e-4732-81a9-8d50fbd1700e" containerName="cilium-operator" May 17 00:41:39.017740 kubelet[1930]: I0517 00:41:39.016494 1930 memory_manager.go:355] "RemoveStaleState removing state" podUID="a71154ac-d7bc-4377-905d-b04e4476e2c6" containerName="cilium-agent" May 17 00:41:39.021542 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:39.034698 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.049570 systemd[1]: Created slice kubepods-burstable-pod7196e1f8_053b_48fd_acd5_088487a86778.slice. May 17 00:41:39.062066 systemd[1]: Started session-28.scope. 
May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062645 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-bpf-maps\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062686 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-clustermesh-secrets\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062710 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgpw\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-kube-api-access-zhgpw\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062732 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-hostproc\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062752 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-cilium-ipsec-secrets\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073400 kubelet[1930]: I0517 00:41:39.062773 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-run\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.072312 systemd-logind[1189]: New session 28 of user core. 
May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062791 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-lib-modules\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062810 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-xtables-lock\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062829 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-kernel\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062849 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-etc-cni-netd\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062873 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-hubble-tls\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073776 kubelet[1930]: I0517 00:41:39.062927 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-net\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073999 kubelet[1930]: I0517 00:41:39.062948 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-cgroup\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073999 kubelet[1930]: I0517 00:41:39.062965 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cni-path\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.073999 kubelet[1930]: I0517 00:41:39.062986 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7196e1f8-053b-48fd-acd5-088487a86778-cilium-config-path\") pod \"cilium-mmzw2\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") " pod="kube-system/cilium-mmzw2" May 17 00:41:39.334123 kubelet[1930]: I0517 00:41:39.331894 1930 setters.go:602] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:41:39Z","lastTransitionTime":"2025-05-17T00:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:41:39.359578 kubelet[1930]: E0517 00:41:39.359540 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:39.368026 env[1201]: time="2025-05-17T00:41:39.361994148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmzw2,Uid:7196e1f8-053b-48fd-acd5-088487a86778,Namespace:kube-system,Attempt:0,}" May 17 00:41:39.445029 env[1201]: time="2025-05-17T00:41:39.444533208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:39.445029 env[1201]: time="2025-05-17T00:41:39.444633988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:39.445029 env[1201]: time="2025-05-17T00:41:39.444692649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:39.454150 env[1201]: time="2025-05-17T00:41:39.445189284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6 pid=3790 runtime=io.containerd.runc.v2 May 17 00:41:39.487548 sshd[3764]: pam_unix(sshd:session): session closed for user core May 17 00:41:39.501494 systemd[1]: Started sshd@28-10.0.0.137:22-10.0.0.1:41050.service. May 17 00:41:39.535579 systemd[1]: sshd@27-10.0.0.137:22-10.0.0.1:41040.service: Deactivated successfully. May 17 00:41:39.536638 systemd[1]: session-28.scope: Deactivated successfully. May 17 00:41:39.538566 systemd-logind[1189]: Session 28 logged out. Waiting for processes to exit. May 17 00:41:39.543131 systemd-logind[1189]: Removed session 28. May 17 00:41:39.554567 systemd[1]: Started cri-containerd-44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6.scope. May 17 00:41:39.611951 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 41050 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:41:39.614062 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:41:39.667040 systemd[1]: Started session-29.scope. May 17 00:41:39.668773 systemd-logind[1189]: New session 29 of user core. 
May 17 00:41:39.776373 env[1201]: time="2025-05-17T00:41:39.776323689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmzw2,Uid:7196e1f8-053b-48fd-acd5-088487a86778,Namespace:kube-system,Attempt:0,} returns sandbox id \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\""
May 17 00:41:39.778417 kubelet[1930]: E0517 00:41:39.777217 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:39.795285 env[1201]: time="2025-05-17T00:41:39.795233548Z" level=info msg="CreateContainer within sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:41:39.891933 env[1201]: time="2025-05-17T00:41:39.888274119Z" level=info msg="CreateContainer within sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\""
May 17 00:41:39.893430 env[1201]: time="2025-05-17T00:41:39.892780447Z" level=info msg="StartContainer for \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\""
May 17 00:41:39.938045 systemd[1]: Started cri-containerd-2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b.scope.
May 17 00:41:39.952185 systemd[1]: cri-containerd-2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b.scope: Deactivated successfully.
May 17 00:41:39.997995 env[1201]: time="2025-05-17T00:41:39.997870465Z" level=info msg="shim disconnected" id=2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b
May 17 00:41:39.997995 env[1201]: time="2025-05-17T00:41:39.997959101Z" level=warning msg="cleaning up after shim disconnected" id=2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b namespace=k8s.io
May 17 00:41:39.997995 env[1201]: time="2025-05-17T00:41:39.997972287Z" level=info msg="cleaning up dead shim"
May 17 00:41:40.026937 env[1201]: time="2025-05-17T00:41:40.021628373Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3862 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:41:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 17 00:41:40.030947 env[1201]: time="2025-05-17T00:41:40.023477827Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed"
May 17 00:41:40.033303 env[1201]: time="2025-05-17T00:41:40.031230353Z" level=error msg="Failed to pipe stderr of container \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\"" error="reading from a closed fifo"
May 17 00:41:40.033303 env[1201]: time="2025-05-17T00:41:40.033228899Z" level=error msg="Failed to pipe stdout of container \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\"" error="reading from a closed fifo"
May 17 00:41:40.045371 env[1201]: time="2025-05-17T00:41:40.044631300Z" level=error msg="StartContainer for \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 17 00:41:40.045555 kubelet[1930]: E0517 00:41:40.045029 1930 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b"
May 17 00:41:40.055712 kubelet[1930]: E0517 00:41:40.054985 1930 kuberuntime_manager.go:1341] "Unhandled Error" err=<
May 17 00:41:40.055712 kubelet[1930]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 17 00:41:40.055712 kubelet[1930]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 17 00:41:40.055712 kubelet[1930]: rm /hostbin/cilium-mount
May 17 00:41:40.056029 kubelet[1930]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhgpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mmzw2_kube-system(7196e1f8-053b-48fd-acd5-088487a86778): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 17 00:41:40.056029 kubelet[1930]: > logger="UnhandledError"
May 17 00:41:40.057003 kubelet[1930]: E0517 00:41:40.056922 1930 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mmzw2" podUID="7196e1f8-053b-48fd-acd5-088487a86778"
May 17 00:41:40.257494 env[1201]: time="2025-05-17T00:41:40.256679165Z" level=info msg="StopPodSandbox for \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\""
May 17 00:41:40.257713 env[1201]: time="2025-05-17T00:41:40.257686754Z" level=info msg="Container to stop \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:41:40.262748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6-shm.mount: Deactivated successfully.
May 17 00:41:40.273801 systemd[1]: cri-containerd-44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6.scope: Deactivated successfully.
May 17 00:41:40.330235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6-rootfs.mount: Deactivated successfully.
May 17 00:41:40.359978 env[1201]: time="2025-05-17T00:41:40.358747721Z" level=info msg="shim disconnected" id=44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6
May 17 00:41:40.359978 env[1201]: time="2025-05-17T00:41:40.358803717Z" level=warning msg="cleaning up after shim disconnected" id=44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6 namespace=k8s.io
May 17 00:41:40.359978 env[1201]: time="2025-05-17T00:41:40.358815599Z" level=info msg="cleaning up dead shim"
May 17 00:41:40.387989 env[1201]: time="2025-05-17T00:41:40.387903250Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3891 runtime=io.containerd.runc.v2\n"
May 17 00:41:40.388478 env[1201]: time="2025-05-17T00:41:40.388416767Z" level=info msg="TearDown network for sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" successfully"
May 17 00:41:40.388478 env[1201]: time="2025-05-17T00:41:40.388444689Z" level=info msg="StopPodSandbox for \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" returns successfully"
May 17 00:41:40.578336 kubelet[1930]: I0517 00:41:40.578208 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-run\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.578604 kubelet[1930]: I0517 00:41:40.578576 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7196e1f8-053b-48fd-acd5-088487a86778-cilium-config-path\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.578719 kubelet[1930]: I0517 00:41:40.578698 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-cgroup\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.578830 kubelet[1930]: I0517 00:41:40.578811 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cni-path\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.578958 kubelet[1930]: I0517 00:41:40.578938 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-hostproc\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579072 kubelet[1930]: I0517 00:41:40.579052 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-kernel\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579214 kubelet[1930]: I0517 00:41:40.579194 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-bpf-maps\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579320 kubelet[1930]: I0517 00:41:40.579301 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-etc-cni-netd\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579436 kubelet[1930]: I0517 00:41:40.579416 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-clustermesh-secrets\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579556 kubelet[1930]: I0517 00:41:40.579536 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhgpw\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-kube-api-access-zhgpw\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579662 kubelet[1930]: I0517 00:41:40.579640 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-xtables-lock\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.579773 kubelet[1930]: I0517 00:41:40.579754 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-lib-modules\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.580769 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.580821 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.583749 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7196e1f8-053b-48fd-acd5-088487a86778-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.583796 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.583817 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cni-path" (OuterVolumeSpecName: "cni-path") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.588688 kubelet[1930]: I0517 00:41:40.583836 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-hostproc" (OuterVolumeSpecName: "hostproc") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589202 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589609 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589635 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589687 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-hubble-tls\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589714 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-net\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.589905 kubelet[1930]: I0517 00:41:40.589744 1930 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-cilium-ipsec-secrets\") pod \"7196e1f8-053b-48fd-acd5-088487a86778\" (UID: \"7196e1f8-053b-48fd-acd5-088487a86778\") "
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597150 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597190 1930 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cni-path\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597201 1930 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-hostproc\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597211 1930 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597222 1930 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597232 1930 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597242 1930 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-lib-modules\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597252 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-cilium-run\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.597263 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7196e1f8-053b-48fd-acd5-088487a86778-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.601266 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.604291 kubelet[1930]: I0517 00:41:40.601332 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:41:40.613493 systemd[1]: var-lib-kubelet-pods-7196e1f8\x2d053b\x2d48fd\x2dacd5\x2d088487a86778-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhgpw.mount: Deactivated successfully.
May 17 00:41:40.613619 systemd[1]: var-lib-kubelet-pods-7196e1f8\x2d053b\x2d48fd\x2dacd5\x2d088487a86778-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:41:40.613694 systemd[1]: var-lib-kubelet-pods-7196e1f8\x2d053b\x2d48fd\x2dacd5\x2d088487a86778-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:41:40.626526 systemd[1]: var-lib-kubelet-pods-7196e1f8\x2d053b\x2d48fd\x2dacd5\x2d088487a86778-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 17 00:41:40.633326 kubelet[1930]: I0517 00:41:40.629397 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-kube-api-access-zhgpw" (OuterVolumeSpecName: "kube-api-access-zhgpw") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "kube-api-access-zhgpw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:41:40.648892 kubelet[1930]: I0517 00:41:40.648160 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:41:40.655661 kubelet[1930]: I0517 00:41:40.654435 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:41:40.655661 kubelet[1930]: I0517 00:41:40.654606 1930 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7196e1f8-053b-48fd-acd5-088487a86778" (UID: "7196e1f8-053b-48fd-acd5-088487a86778"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698167 1930 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698213 1930 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhgpw\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-kube-api-access-zhgpw\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698227 1930 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7196e1f8-053b-48fd-acd5-088487a86778-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698238 1930 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7196e1f8-053b-48fd-acd5-088487a86778-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698249 1930 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.698338 kubelet[1930]: I0517 00:41:40.698260 1930 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7196e1f8-053b-48fd-acd5-088487a86778-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 17 00:41:40.842579 systemd[1]: Removed slice kubepods-burstable-pod7196e1f8_053b_48fd_acd5_088487a86778.slice.
May 17 00:41:41.258380 kubelet[1930]: I0517 00:41:41.258347 1930 scope.go:117] "RemoveContainer" containerID="2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b"
May 17 00:41:41.261175 env[1201]: time="2025-05-17T00:41:41.260826010Z" level=info msg="RemoveContainer for \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\""
May 17 00:41:41.277760 env[1201]: time="2025-05-17T00:41:41.277217768Z" level=info msg="RemoveContainer for \"2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b\" returns successfully"
May 17 00:41:41.398222 kubelet[1930]: I0517 00:41:41.397797 1930 memory_manager.go:355] "RemoveStaleState removing state" podUID="7196e1f8-053b-48fd-acd5-088487a86778" containerName="mount-cgroup"
May 17 00:41:41.417118 systemd[1]: Created slice kubepods-burstable-pod4b670add_3a78_419b_82ae_1d0716ffdcdf.slice.
May 17 00:41:41.509005 kubelet[1930]: I0517 00:41:41.508858 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4b670add-3a78-419b-82ae-1d0716ffdcdf-cilium-ipsec-secrets\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509005 kubelet[1930]: I0517 00:41:41.508917 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-cilium-run\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509005 kubelet[1930]: I0517 00:41:41.508948 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-hostproc\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509005 kubelet[1930]: I0517 00:41:41.508969 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4b670add-3a78-419b-82ae-1d0716ffdcdf-clustermesh-secrets\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509005 kubelet[1930]: I0517 00:41:41.508992 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b670add-3a78-419b-82ae-1d0716ffdcdf-cilium-config-path\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509018 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcp8k\" (UniqueName: \"kubernetes.io/projected/4b670add-3a78-419b-82ae-1d0716ffdcdf-kube-api-access-jcp8k\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509076 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-cni-path\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509114 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-lib-modules\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509139 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-bpf-maps\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509161 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-etc-cni-netd\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509181 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-host-proc-sys-kernel\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509203 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4b670add-3a78-419b-82ae-1d0716ffdcdf-hubble-tls\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509225 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-host-proc-sys-net\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509257 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-cilium-cgroup\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.509361 kubelet[1930]: I0517 00:41:41.509279 1930 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b670add-3a78-419b-82ae-1d0716ffdcdf-xtables-lock\") pod \"cilium-lrtxh\" (UID: \"4b670add-3a78-419b-82ae-1d0716ffdcdf\") " pod="kube-system/cilium-lrtxh"
May 17 00:41:41.731897 kubelet[1930]: E0517 00:41:41.729356 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:41.732068 env[1201]: time="2025-05-17T00:41:41.729983922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrtxh,Uid:4b670add-3a78-419b-82ae-1d0716ffdcdf,Namespace:kube-system,Attempt:0,}"
May 17 00:41:41.774412 env[1201]: time="2025-05-17T00:41:41.774254944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:41:41.774639 env[1201]: time="2025-05-17T00:41:41.774609623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:41:41.774750 env[1201]: time="2025-05-17T00:41:41.774721944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:41:41.775022 env[1201]: time="2025-05-17T00:41:41.774991713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f pid=3919 runtime=io.containerd.runc.v2
May 17 00:41:41.805024 systemd[1]: Started cri-containerd-867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f.scope.
May 17 00:41:41.866595 env[1201]: time="2025-05-17T00:41:41.866511460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrtxh,Uid:4b670add-3a78-419b-82ae-1d0716ffdcdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\""
May 17 00:41:41.868032 kubelet[1930]: E0517 00:41:41.867515 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:41.869461 env[1201]: time="2025-05-17T00:41:41.869427504Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:41:41.891646 kubelet[1930]: E0517 00:41:41.891585 1930 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:41:41.915063 env[1201]: time="2025-05-17T00:41:41.914742343Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c\""
May 17 00:41:41.916267 env[1201]: time="2025-05-17T00:41:41.916220158Z" level=info msg="StartContainer for \"a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c\""
May 17 00:41:41.975429 systemd[1]: Started cri-containerd-a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c.scope.
May 17 00:41:42.103153 systemd[1]: cri-containerd-a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c.scope: Deactivated successfully.
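
Inside the new sandbox the mount-cgroup init container is created and its systemd scope deactivates almost immediately; for a short-lived init container that is the normal run-to-completion pattern rather than a repeat of the earlier failure, as the StartContainer success below confirms. The same progression can be watched from the node, assuming crictl is installed and pointed at containerd:

    # Show all containers (including exited init containers) in the new sandbox.
    crictl ps -a --pod 867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f
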
May 17 00:41:42.108396 env[1201]: time="2025-05-17T00:41:42.108298512Z" level=info msg="StartContainer for \"a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c\" returns successfully"
May 17 00:41:42.236123 env[1201]: time="2025-05-17T00:41:42.236019510Z" level=info msg="shim disconnected" id=a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c
May 17 00:41:42.236123 env[1201]: time="2025-05-17T00:41:42.236082699Z" level=warning msg="cleaning up after shim disconnected" id=a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c namespace=k8s.io
May 17 00:41:42.236123 env[1201]: time="2025-05-17T00:41:42.236094772Z" level=info msg="cleaning up dead shim"
May 17 00:41:42.289173 kubelet[1930]: E0517 00:41:42.265058 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:42.823784 kubelet[1930]: I0517 00:41:42.823141 1930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7196e1f8-053b-48fd-acd5-088487a86778" path="/var/lib/kubelet/pods/7196e1f8-053b-48fd-acd5-088487a86778/volumes"
May 17 00:41:43.123238 kubelet[1930]: W0517 00:41:43.121899 1930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7196e1f8_053b_48fd_acd5_088487a86778.slice/cri-containerd-2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b.scope WatchSource:0}: container "2f2b7bc4f753882d9dae8118206b7376292b9123552a4cd387c24a37ab07457b" in namespace "k8s.io": not found
May 17 00:41:43.264844 kubelet[1930]: E0517 00:41:43.264436 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:43.268552 env[1201]: time="2025-05-17T00:41:43.266806591Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:41:43.302937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362537158.mount: Deactivated successfully.
May 17 00:41:43.346548 env[1201]: time="2025-05-17T00:41:43.346451707Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff\""
May 17 00:41:43.348870 env[1201]: time="2025-05-17T00:41:43.347462724Z" level=info msg="StartContainer for \"ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff\""
May 17 00:41:43.442878 systemd[1]: Started cri-containerd-ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff.scope.
May 17 00:41:43.517573 env[1201]: time="2025-05-17T00:41:43.516296117Z" level=info msg="StartContainer for \"ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff\" returns successfully"
May 17 00:41:43.526319 systemd[1]: cri-containerd-ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff.scope: Deactivated successfully.
May 17 00:41:43.667082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff-rootfs.mount: Deactivated successfully.
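
The recurring dns.go:153 "Nameserver limits exceeded" errors are the kubelet truncating the host resolver list: Kubernetes propagates at most three nameservers into pods, so only 1.1.1.1, 1.0.0.1, and 8.8.8.8 survive and the rest are dropped. An illustrative check of the source file (the kubelet may point --resolv-conf elsewhere, for example at a systemd-resolved stub):

    # More than three entries here triggers the warning seen throughout this log.
    grep -c '^nameserver' /etc/resolv.conf
    grep '^nameserver' /etc/resolv.conf
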
May 17 00:41:43.804532 env[1201]: time="2025-05-17T00:41:43.804378252Z" level=info msg="shim disconnected" id=ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff
May 17 00:41:43.804532 env[1201]: time="2025-05-17T00:41:43.804443946Z" level=warning msg="cleaning up after shim disconnected" id=ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff namespace=k8s.io
May 17 00:41:43.804532 env[1201]: time="2025-05-17T00:41:43.804459355Z" level=info msg="cleaning up dead shim"
May 17 00:41:43.830459 env[1201]: time="2025-05-17T00:41:43.830372383Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
May 17 00:41:44.273121 kubelet[1930]: E0517 00:41:44.267750 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:44.280940 env[1201]: time="2025-05-17T00:41:44.276956932Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:41:44.597499 env[1201]: time="2025-05-17T00:41:44.597246038Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45\""
May 17 00:41:44.604914 env[1201]: time="2025-05-17T00:41:44.598142549Z" level=info msg="StartContainer for \"e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45\""
May 17 00:41:44.694365 systemd[1]: Started cri-containerd-e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45.scope.
May 17 00:41:44.798553 systemd[1]: cri-containerd-e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45.scope: Deactivated successfully.
May 17 00:41:44.803054 env[1201]: time="2025-05-17T00:41:44.799475836Z" level=info msg="StartContainer for \"e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45\" returns successfully"
May 17 00:41:44.873073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45-rootfs.mount: Deactivated successfully.
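
The mount-bpf-fs init container that just completed exists to ensure a BPF filesystem is mounted for the agent's pinned maps. Whether it did its job is easy to verify from the node (illustrative; /sys/fs/bpf is the conventional mount point):

    # Both should report a bpf filesystem mounted at /sys/fs/bpf.
    mountpoint /sys/fs/bpf
    mount -t bpf
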
May 17 00:41:44.903400 env[1201]: time="2025-05-17T00:41:44.903312961Z" level=info msg="shim disconnected" id=e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45
May 17 00:41:44.903400 env[1201]: time="2025-05-17T00:41:44.903389085Z" level=warning msg="cleaning up after shim disconnected" id=e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45 namespace=k8s.io
May 17 00:41:44.903400 env[1201]: time="2025-05-17T00:41:44.903406307Z" level=info msg="cleaning up dead shim"
May 17 00:41:44.939029 env[1201]: time="2025-05-17T00:41:44.938944412Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n"
May 17 00:41:45.283470 kubelet[1930]: E0517 00:41:45.281086 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:45.289846 env[1201]: time="2025-05-17T00:41:45.289805963Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:41:45.323602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670997611.mount: Deactivated successfully.
May 17 00:41:45.349133 env[1201]: time="2025-05-17T00:41:45.349026397Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066\""
May 17 00:41:45.350218 env[1201]: time="2025-05-17T00:41:45.350144726Z" level=info msg="StartContainer for \"fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066\""
May 17 00:41:45.414830 systemd[1]: Started cri-containerd-fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066.scope.
May 17 00:41:45.479141 systemd[1]: cri-containerd-fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066.scope: Deactivated successfully.
May 17 00:41:45.492078 env[1201]: time="2025-05-17T00:41:45.491998957Z" level=info msg="StartContainer for \"fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066\" returns successfully"
May 17 00:41:45.563813 env[1201]: time="2025-05-17T00:41:45.563676442Z" level=info msg="shim disconnected" id=fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066
May 17 00:41:45.564144 env[1201]: time="2025-05-17T00:41:45.564124155Z" level=warning msg="cleaning up after shim disconnected" id=fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066 namespace=k8s.io
May 17 00:41:45.564241 env[1201]: time="2025-05-17T00:41:45.564222030Z" level=info msg="cleaning up dead shim"
May 17 00:41:45.612612 env[1201]: time="2025-05-17T00:41:45.610026946Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4186 runtime=io.containerd.runc.v2\n"
May 17 00:41:46.289120 kubelet[1930]: W0517 00:41:46.288980 1930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b670add_3a78_419b_82ae_1d0716ffdcdf.slice/cri-containerd-a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c.scope WatchSource:0}: task a444ad80698dfff2ec28ee15283a534601224bba812ecf4aa082112fe5d55a8c not found: not found
May 17 00:41:46.318135 kubelet[1930]: E0517 00:41:46.318060 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:46.345603 env[1201]: time="2025-05-17T00:41:46.341433528Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:41:46.387353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083209450.mount: Deactivated successfully.
May 17 00:41:46.409686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654625276.mount: Deactivated successfully.
May 17 00:41:46.420077 env[1201]: time="2025-05-17T00:41:46.419992615Z" level=info msg="CreateContainer within sandbox \"867f4cae0c4b3600cadd55aa28edeea6519cefd1dc7ac213d987fb25d97e8b0f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5\""
May 17 00:41:46.420945 env[1201]: time="2025-05-17T00:41:46.420892722Z" level=info msg="StartContainer for \"77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5\""
May 17 00:41:46.442813 systemd[1]: Started cri-containerd-77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5.scope.
May 17 00:41:46.520788 env[1201]: time="2025-05-17T00:41:46.520691629Z" level=info msg="StartContainer for \"77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5\" returns successfully"
May 17 00:41:47.322783 kubelet[1930]: E0517 00:41:47.322322 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:47.426433 kubelet[1930]: I0517 00:41:47.426253 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lrtxh" podStartSLOduration=6.42622428 podStartE2EDuration="6.42622428s" podCreationTimestamp="2025-05-17 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:47.418525978 +0000 UTC m=+110.690348572" watchObservedRunningTime="2025-05-17 00:41:47.42622428 +0000 UTC m=+110.698046884"
May 17 00:41:48.326255 kubelet[1930]: E0517 00:41:48.324632 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:48.391555 systemd[1]: run-containerd-runc-k8s.io-77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5-runc.MPzL24.mount: Deactivated successfully.
May 17 00:41:48.545151 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:41:49.415851 kubelet[1930]: W0517 00:41:49.411121 1930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b670add_3a78_419b_82ae_1d0716ffdcdf.slice/cri-containerd-ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff.scope WatchSource:0}: task ad47a1743581710cb16266f6cc63890efd135818f4d61731373d24cb21f45fff not found: not found
May 17 00:41:50.671667 systemd[1]: run-containerd-runc-k8s.io-77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5-runc.Bdd3nM.mount: Deactivated successfully.
May 17 00:41:52.550311 kubelet[1930]: W0517 00:41:52.550263 1930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b670add_3a78_419b_82ae_1d0716ffdcdf.slice/cri-containerd-e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45.scope WatchSource:0}: task e8df66baca9e688ce6e793d5f36f3495c1173d90a96cb3d8518943b6b3d63b45 not found: not found
May 17 00:41:52.907966 systemd[1]: run-containerd-runc-k8s.io-77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5-runc.C7WCtM.mount: Deactivated successfully.
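
With cilium-agent running, the kubelet records the pod as started (the podStartSLOduration line above), and the kernel's "No test for seqiv(rfc4106(gcm(aes)))" message is informational rather than an error: it is consistent with ESP crypto algorithms being loaded for the cilium-ipsec-secrets volume this pod mounts. A hedged health check from inside the pod, assuming the cilium CLI that ships in the agent image:

    # Prints OK when the agent considers itself healthy.
    kubectl -n kube-system exec cilium-lrtxh -- cilium status --brief
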
May 17 00:41:53.552787 systemd-networkd[1024]: lxc_health: Link UP
May 17 00:41:53.566773 systemd-networkd[1024]: lxc_health: Gained carrier
May 17 00:41:53.567158 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:41:53.732288 kubelet[1930]: E0517 00:41:53.731510 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:54.368488 kubelet[1930]: E0517 00:41:54.368451 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:54.824544 systemd-networkd[1024]: lxc_health: Gained IPv6LL
May 17 00:41:55.375598 kubelet[1930]: E0517 00:41:55.373887 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:55.664391 kubelet[1930]: W0517 00:41:55.664348 1930 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b670add_3a78_419b_82ae_1d0716ffdcdf.slice/cri-containerd-fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066.scope WatchSource:0}: task fc1110c21016050ea88d9edbd99906fc28b3db811a76b0807293a80c722cb066 not found: not found
May 17 00:41:56.822080 env[1201]: time="2025-05-17T00:41:56.821873726Z" level=info msg="StopPodSandbox for \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\""
May 17 00:41:56.822080 env[1201]: time="2025-05-17T00:41:56.821975839Z" level=info msg="TearDown network for sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" successfully"
May 17 00:41:56.822080 env[1201]: time="2025-05-17T00:41:56.822015864Z" level=info msg="StopPodSandbox for \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" returns successfully"
May 17 00:41:56.824367 env[1201]: time="2025-05-17T00:41:56.823145437Z" level=info msg="RemovePodSandbox for \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\""
May 17 00:41:56.824367 env[1201]: time="2025-05-17T00:41:56.823176436Z" level=info msg="Forcibly stopping sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\""
May 17 00:41:56.824367 env[1201]: time="2025-05-17T00:41:56.823234275Z" level=info msg="TearDown network for sandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" successfully"
May 17 00:41:56.847230 env[1201]: time="2025-05-17T00:41:56.846989624Z" level=info msg="RemovePodSandbox \"2e816ed85fdfcdfc9f598b66109166f9a3ad2d305d01b2727ee166f53fc9ac9e\" returns successfully"
May 17 00:41:56.854932 env[1201]: time="2025-05-17T00:41:56.854864857Z" level=info msg="StopPodSandbox for \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\""
May 17 00:41:56.855161 env[1201]: time="2025-05-17T00:41:56.855012246Z" level=info msg="TearDown network for sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" successfully"
May 17 00:41:56.855161 env[1201]: time="2025-05-17T00:41:56.855050218Z" level=info msg="StopPodSandbox for \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" returns successfully"
May 17 00:41:56.857982 env[1201]: time="2025-05-17T00:41:56.857494805Z" level=info msg="RemovePodSandbox for \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\""
May 17 00:41:56.857982 env[1201]: time="2025-05-17T00:41:56.857522066Z" level=info msg="Forcibly stopping sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\""
May 17 00:41:56.857982 env[1201]: time="2025-05-17T00:41:56.857597849Z" level=info msg="TearDown network for sandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" successfully"
May 17 00:41:56.863317 env[1201]: time="2025-05-17T00:41:56.863177718Z" level=info msg="RemovePodSandbox \"972fcd9d1a20b3d830017a5c5e8d8f3416f6017b5308a1d90126282e7dfeec11\" returns successfully"
May 17 00:41:56.863815 env[1201]: time="2025-05-17T00:41:56.863768515Z" level=info msg="StopPodSandbox for \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\""
May 17 00:41:56.864140 env[1201]: time="2025-05-17T00:41:56.864054435Z" level=info msg="TearDown network for sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" successfully"
May 17 00:41:56.864140 env[1201]: time="2025-05-17T00:41:56.864118966Z" level=info msg="StopPodSandbox for \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" returns successfully"
May 17 00:41:56.865641 env[1201]: time="2025-05-17T00:41:56.864482904Z" level=info msg="RemovePodSandbox for \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\""
May 17 00:41:56.865641 env[1201]: time="2025-05-17T00:41:56.864505616Z" level=info msg="Forcibly stopping sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\""
May 17 00:41:56.865641 env[1201]: time="2025-05-17T00:41:56.864573414Z" level=info msg="TearDown network for sandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" successfully"
May 17 00:41:56.868790 env[1201]: time="2025-05-17T00:41:56.868682255Z" level=info msg="RemovePodSandbox \"44266935bfc43ecec6e00e2e2d66d4d008a79daf4acda082efc4b1bb3903ffb6\" returns successfully"
May 17 00:41:57.513751 systemd[1]: run-containerd-runc-k8s.io-77f1105e05e2e0e5d090b61c3025820c8de9196b317341c48bc3e2723a7a19f5-runc.Q6ZXLw.mount: Deactivated successfully.
May 17 00:42:00.022798 sshd[3812]: pam_unix(sshd:session): session closed for user core
May 17 00:42:00.034962 systemd[1]: sshd@28-10.0.0.137:22-10.0.0.1:41050.service: Deactivated successfully.
May 17 00:42:00.036085 systemd[1]: session-29.scope: Deactivated successfully.
May 17 00:42:00.040056 systemd-logind[1189]: Session 29 logged out. Waiting for processes to exit.
May 17 00:42:00.045590 systemd-logind[1189]: Removed session 29.
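
The lxc_health link coming up is Cilium's endpoint health-check device, and the RemovePodSandbox sweep garbage-collects the dead sandboxes, including 44266935... from the failed first attempt, before the SSH session closes. As a final illustrative check (assuming kubectl and node shell access) that the NotReady condition from the top of this excerpt has cleared:

    kubectl get node localhost
    ip link show lxc_health
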